
Recent observations of the Cosmic Microwave Background (CMB), particularly those provided by Planck, have detected several large-scale anomalies with respect to what is expected from the standard cosmological model. With this in mind, this work presents a study on the possibility of defining new versions of the CMB temperature and $E$-mode polarization maps which take into account the correlation between them. These correlated and uncorrelated parts offer a new observable which could eventually enhance the anomalous signal mentioned above and even retrieve information about its unknown nature. Obtaining them requires the construction and application of a Wiener filter to the CMB maps. We study the impact that non-idealized maps (including instrumental noise, foreground residuals, or masked skies) can have on the definition of the filter and on its efficiency. We have also studied how to define the filter from a given set of observations, without relying on prior knowledge of a model describing their statistical properties. Finally, as an illustration, we analyse one possible application of the methodology to study one of the CMB anomalies: the lack of power at large scales.
$\textbf{Keywords}$: Cosmic Microwave Background (CMB), Temperature, Polarization, Correlated / Uncorrelated Maps, CMB Anomalies.
## We import here all the packages we will need along the project:
import sys, platform, os
import numpy as np #Scientific computing package (multidimensional array object...)
import pandas as pd #Data analysis and manipulation tool
import matplotlib.pyplot as plt #Visualization of data plots, histograms and more
import seaborn as sns #Statistical data visualization
import astropy #Tools for performing astronomy and astrophysics
from astropy.utils.data import get_pkg_data_filename
from astropy.table import Table
from astropy.io import fits
from scipy import interpolate
# Specific packages for CMB studies:
import camb #Calculate power spectra given the Cosmological Parameters solving Boltzmann equations
import healpy as hp #Useful to handle CMB maps, Cls <-> maps <-> alms ...
import pysm3 #Full-sky simulations of Galactic foregrounds
import pysm3.units as u
import pymaster as nmt #Compute full-sky angular cross-power spectra of masked skies
import warnings
warnings.filterwarnings("ignore")
path = '/home/laura/MFPyC/CMB/SOMdM_2020/files'
### We define the parameters to work with:
nside = 512
npix = hp.nside2npix(nside)
lmax = 3 * nside - 1
Anside = np.sqrt(4 * np.pi / npix) # sqrt of the pixel area for nside=512
size_alm = hp.sphtfunc.Alm.getsize(lmax)
nbmc = 100
fwhm = (30.0 * np.pi) / (60 * 180) # 30 arcmin to rads
l = np.arange(lmax + 1)
ells_nmt = np.arange(2, 3 * nside)
px = hp.sphtfunc.pixwin(nside, pol=True, lmax = lmax)
fw_TEB = hp.sphtfunc.gauss_beam(fwhm, lmax = lmax, pol = True)
###########################################
### Functions for computing chi2 values ###
###########################################
def cov_mat_cosmic_variance(l, cls, fsky=1):
    '''
    Computes the cosmic variance given the multipole moments (l), the power spectrum (cls)
    and, optionally, the observed sky fraction (fsky).
    '''
    cv_diagonal = (2 / ((2 * l + 1) * fsky)) * cls**2
    return cv_diagonal
def chi2(l, cl_th, cl_est, fsky=1, cov=np.array(0)):
    '''
    Computes the chi2 value given the multipole moments (l), the theoretical prediction and the
    estimation of the spectrum and, optionally, the f_sky and the covariance. If no f_sky is
    indicated, full-sky maps are assumed (f_sky=1).
    The covariance matrix can have more than one dimension; in that case the chi2 value is
    computed with the usual matrix product. If no covariance is indicated, it is approximated
    by the cosmic variance.
    '''
    if cov.all() == 0:
        cosmic_var = np.array(cov_mat_cosmic_variance(l, cl_th, fsky))
        chi2 = np.sum((cl_th - cl_est)**2 / cosmic_var)
        return chi2, cosmic_var
    else:
        if cov.ndim >= 2:
            num = cl_th - cl_est
            chi2 = np.matmul(np.matmul(num[np.newaxis], np.linalg.inv(cov)), num[..., np.newaxis])
            return chi2[0, 0]
        if cov.ndim == 1:
            chi2 = np.sum((cl_th - cl_est)**2 / cov)
            return chi2
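As a quick sanity check of the functions above, the sketch below (illustrative toy values, not CMB data) fluctuates a spectrum within its cosmic-variance errors and verifies that the reduced chi2 comes out of order one. The one-line cosmic-variance expression is repeated inline so the snippet runs on its own.

```python
import numpy as np

# Toy check of the chi2 machinery: a "theory" spectrum, a noisy "estimate",
# and the diagonal cosmic-variance errors (same formula as
# cov_mat_cosmic_variance, repeated here for self-containment).
rng = np.random.default_rng(0)
ell = np.arange(2, 100)
cl_th = 1000.0 / (ell * (ell + 1))             # toy power spectrum
cv = 2.0 / (2 * ell + 1) * cl_th**2            # diagonal cosmic variance
cl_est = cl_th + rng.normal(0.0, np.sqrt(cv))  # fluctuate within the errors
chi2_val = np.sum((cl_th - cl_est) ** 2 / cv)
# chi2 per multipole should be of order one
print(chi2_val / len(ell))
```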
### Personalized parameters for the plots:
plt.rc('font', size=20)          # controls default text sizes
plt.rc('axes', titlesize=20) # fontsize of the axes title
plt.rc('axes', labelsize=20) # fontsize of the x and y labels
plt.rc('xtick', labelsize=20) # fontsize of the tick labels
plt.rc('ytick', labelsize=20) # fontsize of the tick labels
plt.rc('legend', fontsize=25) # legend fontsize
plt.rc('figure', titlesize=20) # fontsize of the figure title
Disclaimer: Most likely there are more efficient ways to write the code, for example defining the functions in separate modules, parallel programming or defining more functions, but this is the best way I have found :)
I would also recommend having a look at the $\verb+HDF5+$ binary data format for storing huge data sets, although I have been using $\verb+.fits+$, which I find more intuitive.
(Some loops take a long time to run and are commented out; uncomment them when needed.)
Right after the Big Bang, the temperature of the Universe was so high that radiation was extremely intense. This translated into an opaque Universe where photons and baryonic matter collided with each other, preventing the formation of nuclei; consequently, matter was a hot ionized plasma. Around $380000$ years after the Big Bang the energy, due to the expansion of the Universe, dropped enough for the radiation to decouple from matter, leading to the constitution of the first atoms (mainly hydrogen). This period, when these first neutral atoms were formed, is known as the $\textit{recombination epoch}$. When the Universe's temperature reached approximately $3000$ K, most protons had already recombined to form neutral atoms, which allowed the photons to travel freely. At that time the Universe became transparent. The radiation distribution of the decoupled photons follows a black-body spectrum, and corresponds nowadays to a temperature of $T_0 = 2.72548 \pm 0.00057$ K [Fixsen, 2009]. This isotropic radiation is known as the Cosmic Microwave Background (CMB).

In 1964 Penzias and Wilson discovered this relic radiation while trying to calibrate an antenna whose original purpose was to detect the light bounced off the Echo balloon satellites [Penzias, 1965]. They detected a homogeneous and uniform radiation compatible with a background noise of $\sim 3.5$ K that was first interpreted as the CMB radiation in [Dicke, 1965]. Years later, the CMB was measured at different frequencies with the FIRAS experiment [Wright, 1994] of the Cosmic Background Explorer (COBE) satellite, which verified that the radiation followed the theorized black-body distribution. This was a proof of the photon decoupling and an important piece of evidence for the Big Bang theory. Besides, the DMR experiment [Smoot, 1992] of the COBE satellite revealed that the CMB has deviations around $T_0$ on the order of $10^{-5}$ K. These anisotropies were later studied by the Wilkinson Microwave Anisotropy Probe (WMAP) [Bennet, 2003] and Planck [PlanckI, 2014] missions, launched in 2001 and 2009 respectively. The fluctuations, although small, opened an interesting field of study, the CMB anisotropies. It is important to note that measuring them requires precise observations with sensitive detectors, so that temperature differences smaller than $1$ part in $10^5$ can be determined.

Measurements of CMB temperature and polarization anisotropies encode a wealth of cosmological information, such as the matter distribution or the composition of the Universe, among others. Not only that, but they have resulted in the birth of Observational Cosmology and have also played a major role in establishing the standard $\Lambda$CDM cosmological model. The standard model of cosmology is a six-parameter model based on a flat Universe ($\Omega_k=0$), dominated by a cosmological constant related to the dark energy ($\Lambda$) and cold dark matter (CDM), with initial Gaussian, adiabatic fluctuations seeded by cosmic inflation. The six basic cosmological parameters are the baryon density ($\Omega_b h^2$, where $H_0 = 100 h $ km s$^{-1}$ Mpc$^{-1}$), the cold dark matter density ($\Omega_c h^2$), an approximation to the observed angular size of the sound horizon at recombination ($\theta_{MC}$), the amplitude of the primordial density perturbations ($A_s$) at $k=0.05$ Mpc$^{-1}$, the spectral index ($n_s$) of the corresponding power law describing the size distribution of the primordial fluctuations, and the reionization optical depth ($\tau$). The latter is related to the probability that a given microwave photon scatters off free electrons in the ionized intergalactic medium. There are other parameters, such as the tensor-to-scalar ratio ($r$), whose values can be constrained experimentally. The $r$ parameter is defined as the ratio of the primordial power in tensor perturbations to that in density perturbations. The latest measurements provided by Planck revealed an upper limit of $r<0.044$ at a $95\%$ confidence level [Tristram, 2017]; we will assume $r=0$ hereinafter. Observations have also confirmed the flatness of the Universe ($\Omega_k = 0$) and determined the Universe's energy content.
We know that the Universe is in accelerated expansion at present $\Omega_{\Lambda} \sim 0.69$, with a matter content of only $\Omega_m \sim 0.31$ and a negligible contribution from radiation $\Omega_{rad}\approx 10^{-4}$. Table $1$ summarizes the latest cosmological parameters from Planck [PlanckVI, 2018].
| Parameter | Planck data |
|---|---|
| Fit parameters: | |
| $\Omega_b h^2$ | 0.02242 $\pm$ 0.00014 |
| $\Omega_c h^2$ | 0.11933 $\pm$ 0.00091 |
| 100$ \cdot \theta_{MC}$ | 1.04101 $\pm$ 0.00029 |
| $\tau$ | 0.0561 $\pm$ 0.0071 |
| $n_s$ | 0.9665 $\pm$ 0.0038 |
| Derived parameters: | |
| $H_0$ | (67.66 $\pm$ 0.42) km s$^{-1}$ Mpc$^{-1}$ |
| $\Omega_{\Lambda}$ | 0.6889 $\pm$ 0.0056 |
| $\Omega_m$ | 0.3111 $\pm$ 0.0056 |
| Age | (13.787 $\pm$ 0.020) Gyr |
The CMB anisotropies are strongly tied to the existence of the large-scale structures (LSS) we observe in the present, e.g., galaxies, galaxy clusters, superclusters and beyond. These were seeded by spatial density fluctuations in the early Universe which appear in the CMB anisotropies (notice that locations with matter over-densities (under-densities) show an excess (lack) of temperature). Therefore LSS observations may also confirm the existence of primordial fluctuations in the early Universe. These primary CMB anisotropies are due to the gravitational redshift at large angular scales, and to the evolution of the primordial photon-baryon fluid under gravity and Compton scattering at smaller scales. Not only that, but we need to consider the fact that photons interact with cosmic structures on their way towards us. This effect is accounted for in the secondary CMB anisotropies, which can be classified into two types of interactions: (i) gravitational effects like gravitational lensing or the integrated Sachs-Wolfe (ISW) effect; (ii) scattering effects such as the Sunyaev-Zel'dovich (SZ) effect, caused by inverse Compton interaction between photons and free electrons.
Unless otherwise stated, we have been using these parameters to generate the CMB spectra. To do this we make use of the package $\verb+CAMB+$ [Lewis, 2011], which solves the Boltzmann equations for the early Universe matter content.
In this project we will analyse temperature and polarization maps, trying to reduce the CMB noise at large scales. We can use the high precision available on the cosmological information and also make estimations on the upcoming observations with LiteBIRD [Hazumi, 2019].
The CMB anisotropies contain a lot of information on the statistical properties of the initial perturbations and the energy and matter content that governs the evolution of the Universe among others. All this information is encoded in $\textbf{the angular power spectrum of CMB anisotropies}$ which is a key observable and is described below [Challinor, 2012], [Samtleben, 2007].
Since the CMB is generated by random fluctuations, we can only predict its statistical properties as a function of angular size. The temperature fluctuations we are going to analyse are projected on a 2D spherical surface, and the most common description is given by expanding the temperature field in spherical harmonics. We are interested in the deviations from the average temperature ($T_0$), and in general we will work with the following dimensionless quantity:
$\begin{equation} \Theta(\vec{x},\eta,\theta, \phi) = \sum^{\infty}_{\ell=1} \sum^{\ell}_{m=-\ell} a_{\ell m}(\vec{x}, \eta) Y_{\ell m}(\theta,\phi) = \frac{T-T_0}{T_0}(\vec{x}, \eta, \theta, \phi), \label{eq:Theta_sph_harm}\tag{1.1} \end{equation}$
Although this is defined at every point in space and time, we can observe it only here (at $\vec{x}_0 \equiv 0$) and now (at $\eta_0 \equiv 0$), where we can take these coordinates to be at the origin without loss of generality. This means that the only relevant dependence in our anisotropy observations is on the sky polar coordinates ($\theta$, $\phi$). The spherical harmonic functions are defined as:
$\begin{equation} Y_{\ell m} (\theta, \phi) = \sqrt{\frac{2\ell+1}{4 \pi} \frac{(\ell-m)!}{(\ell+m)!}} P^m_{\ell} (\cos \theta) e^{im\phi}. \label{eq:intro_sph_harm} \tag{1.2} \end{equation}$
The multipole $\ell \in \mathbb{Z}^+$ describes the characteristic angular size of a fluctuation mode, the order $m$ describes its angular orientation ($-\ell \leq m \leq \ell$), and $P^m_{\ell}$ are the associated Legendre polynomials. These functions, as defined in equation \eqref{eq:intro_sph_harm}, form a complete orthonormal set on the unit sphere. Their orthonormality relation can be written as:
$\begin{equation} \int_{\Omega} d\Omega\ Y_{\ell m }(\theta, \phi) Y ^*_{\ell' m' }(\theta, \phi) = \delta_{\ell \ell'} \delta_{m m '}, \label{eq:normalization harmonics}\tag{1.3} \end{equation}$
where $\Omega$ is the solid angle spanned by $(\theta, \phi)$. We make use of this property to relate the observable $a_{\ell m}$'s to $\Theta$. Multiplying the expansion of $\Theta$ in equation $\eqref{eq:Theta_sph_harm}$ by $Y^*_{\ell m}$, integrating over the sphere and using equation $\eqref{eq:normalization harmonics}$, we obtain:
$\begin{equation} a_{\ell m} = \int_{\Omega} d\Omega\ \Theta(\theta, \phi) Y^*_{\ell m}(\theta, \phi). \label{eq:alm}\tag{1.4} \end{equation}$
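As an illustrative check (not part of the analysis pipeline), Eqs. (1.2) and (1.3) can be verified numerically by building $Y_{\ell m}$ from scipy's associated Legendre functions and integrating over the sphere:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv
from scipy.integrate import dblquad

def Ylm(l, m, theta, phi):
    # Direct transcription of Eq. (1.2), using the associated Legendre
    # function lpmv(m, l, x) with the usual Condon-Shortley phase.
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

def overlap(l1, m1, l2, m2):
    # Numerical version of Eq. (1.3): integrate Y_{l1 m1} Y*_{l2 m2} over
    # the full sphere (the result is real for m1 = m2).
    re, _ = dblquad(lambda th, ph: (Ylm(l1, m1, th, ph) *
                                    np.conj(Ylm(l2, m2, th, ph))).real * np.sin(th),
                    0, 2 * np.pi, 0, np.pi)
    return re

print(overlap(2, 1, 2, 1))  # ~ 1 (normalized)
print(overlap(2, 1, 3, 1))  # ~ 0 (orthogonal)
```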
We can only extract information of the distribution from which these $a_{\ell m}$ are drawn. The mean value of all $a_{\ell m}$'s is zero ($\langle a_{\ell m} \rangle = 0$). If we assume an isotropic field we can determine the $\textbf{angular power spectrum}$ ($C_{\ell} $) of these fluctuations as the variance of the harmonic coefficients,
$\begin{equation} \langle a_{\ell m} a_{\ell'm'}^* \rangle = C_{\ell} \delta_{\ell \ell'} \delta_{mm'}, \tag{1.5} \end{equation}$
where the brackets denote an ensemble average over skies of the same cosmology. According to most theories, the CMB should be statistically isotropic, with perturbations that can be approximated as Gaussian. In this case, all the cosmological information is contained in these $C_{\ell} $ coefficients. Recalling the fact that the $a_{\ell m}$'s coefficients are drawn from the same distribution, for a given $\ell$, each $a_{\ell m}$ has the same variance. As we can only measure $(2\ell +1)$ independent $m$-modes, we can estimate this power spectrum from the obtained maps as:
$\begin{equation} C_{\ell} = \frac{1}{2\ell+1} \sum^{\ell}_{m=-\ell} |a_{\ell m}|^2. \tag{1.6} \end{equation}$
which has an associated uncertainty, known as the "cosmic variance", given by:
$\begin{equation} \sigma^2\left( C_{\ell}\right) = \frac{2}{2\ell +1} \cdot C_{\ell}^2. \label{eq:cosmic variance TH}\tag{1.7} \end{equation}$
The cosmic variance is the most fundamental and inevitable source of error in the measurement of the CMB power spectra. The quality of the estimation of the average value depends significantly on the sample size: the larger the sample, the closer the estimation is to the actual value. Figure 3 shows $D_{\ell} \equiv \ell(\ell +1)/(2 \pi) \cdot C_{\ell}$, where $C_{\ell}$ is the original power spectrum, for the temperature measured with Planck, and we can observe that the lower multipoles have larger uncertainties:
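To make the large-scale limitation concrete, the relative cosmic-variance error $\sigma(C_\ell)/C_\ell = \sqrt{2/(2\ell+1)}$ from Eq. (1.7) can be evaluated at a few multipoles (full sky assumed, values for illustration):

```python
import numpy as np

# Relative cosmic-variance error sigma(C_l)/C_l = sqrt(2/(2l+1)):
# the largest scales (lowest l) are intrinsically the hardest to measure.
ells = np.array([2, 20, 200, 2000])
rel_err = np.sqrt(2.0 / (2 * ells + 1))
print(rel_err)  # ~ [0.63, 0.22, 0.07, 0.02]
```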

Figure 3: Planck 2018 temperature power spectrum [PlanckVI, 2018] . The solid line represents the theoretical model based on $\Lambda$CDM and the residuals with respect to the best fit are shown in the lower panel.
The $\ell = 0$ term, the average CMB temperature, and the $\ell =1$ term, the Doppler shift dominated by the motion of the Earth relative to the CMB, are usually removed in CMB analyses. Thus, the $\ell = 2$ term is the first non-zero term used. The region below $\ell \approx 20$ in Figure $3$ is related to the primordial energy perturbations. At higher $\ell$ values we find the acoustic oscillations ($100 \lesssim \ell \lesssim 1000$): (i) the first peak ($\ell \sim 200$) reveals that the Universe is close to spatially flat; (ii) the relative difference between the even and odd peaks (e.g. the second peak, $\ell \sim 500$) tells us about the amount of baryonic matter; (iii) from the third peak ($\ell \sim 800$) to the damping tail we can obtain information on the dark matter, which also provides consistency checks of the underlying assumptions [Samtleben, 2007].
The CMB radiation is also linearly polarized and the measurement of this polarization is an important part of the current CMB research. The CMB polarization is generated by Thomson scattering of anisotropic radiation. This mechanism produces a fractional polarization of about $10\%$.
For a quasi-monochromatic electromagnetic wave propagating in the direction $\hat{n}$, with arbitrary polarization, we can define [Kamionkowski, 1997]:
\begin{equation} \text{E}_i = a_i \cdot \cos\left(\omega_0t - \theta_i(t)\right) \quad \text{with} \quad i=1 ,2 \quad , \tag{1.8} \end{equation}where E$_i$ is the electric field (Note the difference between the electric field and the $E$-mode of polarization we will later define.) in the direction of the unit vector $\hat{e_i}$, which forms an orthonormal basis set with $\hat{n}$. The Stokes parameters are defined as:
$\begin{align} I \equiv T = \langle |a_1|^2 \rangle + \langle |a_2|^2 \rangle, \tag{1.9}\\ Q = \langle |a_1|^2 \rangle - \langle |a_2|^2 \rangle, \tag{1.10}\\ U = \langle a_1 a_2 \cos(\theta_1 - \theta_2) \rangle, \tag{1.11}\\ V = \langle a_1 a_2 \sin(\theta_1 - \theta_2) \rangle . \tag{1.12} \end{align}$
The parameter $T$ describes the absolute intensity, $V$ measures the circular polarization and it is expected to be zero for the CMB, since Thomson scattering only induces linear polarization. At last, $Q$ and $U$ measure linear polarization and are used to parametrize the CMB polarization.
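For illustration (these are the standard relations for linear polarization, not definitions introduced in the text above), the polarization amplitude and angle follow directly from $Q$ and $U$; the numbers below are arbitrary:

```python
import numpy as np

# Linear polarization amplitude and angle from the Stokes parameters.
Q, U = 3.0, 4.0                 # illustrative values
P = np.hypot(Q, U)              # amplitude sqrt(Q^2 + U^2)
psi = 0.5 * np.arctan2(U, Q)    # polarization angle in radians
print(P, psi)
```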
There are cases in which it may be useful to characterize polarization in different ways. The most intuitive and physical decomposition is a geometrical one, using the so-called $E$- and $B$-modes, which are defined as linear combinations of the $Q$ and $U$ parameters as:
$\begin{equation} (Q \pm iU)(\hat{n}) = \sum^{\infty}_{\ell=2} \sum^{+\ell}_{m=-\ell} a^{\pm 2}_{\ell m}\ {}_{\pm 2}Y_{\ell m} (\hat{n}) = \sum^{\infty}_{\ell=2} \sum^{+\ell}_{m=-\ell} (a^E_{\ell m} \pm i a^B_{\ell m})\ {}_{\pm 2}Y_{\ell m} (\hat{n}) , \tag{1.13} \end{equation}$
where the $E$ and $B$ modes are defined by:
$\begin{equation} a^E_{\ell m} = \frac{1}{2}(a^{+2}_{\ell m} + a^{-2}_{\ell m}) \qquad , \quad a^B_{\ell m} = \frac{-i}{2}(a^{+2}_{\ell m} - a^{-2}_{\ell m}). \tag{1.14} \end{equation}$
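Eq. (1.14) and the spin-2 coefficients in Eq. (1.13) are mutually inverse; a short numpy check with random coefficients (illustrative only) confirms that $a^{\pm 2}_{\ell m} = a^E_{\ell m} \pm i\, a^B_{\ell m}$:

```python
import numpy as np

# Random complex stand-ins for the spin-(+2) and spin-(-2) coefficients.
rng = np.random.default_rng(1)
a_p2 = rng.normal(size=5) + 1j * rng.normal(size=5)   # a^{+2}_{lm}
a_m2 = rng.normal(size=5) + 1j * rng.normal(size=5)   # a^{-2}_{lm}

# Eq. (1.14):
aE = 0.5 * (a_p2 + a_m2)
aB = -0.5j * (a_p2 - a_m2)

# Inverting recovers the spin-2 coefficients: a^{+-2} = aE +- i aB
assert np.allclose(aE + 1j * aB, a_p2)
assert np.allclose(aE - 1j * aB, a_m2)
```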
As mentioned before for the temperature anisotropies, a scalar field on the sphere can be expanded in spherical harmonics, $Y_{\ell m}(\theta, \phi)$, which form a complete and orthonormal basis. For the polarization we need to introduce another set of functions to expand spin-$s$ quantities, the so-called spin-weighted spherical harmonics $_sY_{\ell m}$. These functions satisfy analogous completeness and orthogonality relations,
$\begin{align} \int^{2 \pi}_0 d\phi \int^1_{-1} d\cos \theta\ _sY_{\ell' m'}^*(\theta, \phi)\ _sY_{\ell m}(\theta, \phi) = \delta_{\ell' \ell} \delta_{m' m} \ , \tag{1.15}\\ _sY_{\ell m}^* = (-1)^s\ _{-s}Y_{\ell -m}\ . \tag{1.16} \end{align}$
To describe the CMB polarization we particularize these functions to the $s=2$ case. It is important to note that these $E$- and $B$-modes are independent of the chosen coordinate system, contrary to $Q$ and $U$. Not only that, but the primordial $E$-mode is a scalar quantity generated by the initial energy density and tensor fluctuations (the latter being subdominant, as shown by the current limits on $r$), while the primordial $B$-mode has a purely tensorial origin, related to metric perturbations and the possible existence of primordial gravitational waves. Besides, gravitational lensing also converts part of the scalar $E$-modes into $B$-modes. The important parameter for determining the strength of the $B$-mode polarization is $r$, the tensor-to-scalar ratio. Moreover, we can define real-space, spin-0 functions from these coefficients:
$\begin{equation} E_{\ell} (\theta, \phi) = \sum_m a^E_{\ell m} Y_{\ell m}(\theta,\phi) \quad , \quad B_{\ell} (\theta, \phi) = \sum_m a^B_{\ell m} Y_{\ell m}(\theta,\phi), \tag{1.17} \end{equation}$
where it is found that $E$ has even, $(-1)^\ell$, parity (temperature also has even parity) and $B$ has odd, $(-1)^{\ell+1}$, parity.
With this we can define a set of six power spectra, $C_{\ell} ^{TT}$, $C_{\ell} ^{EE}$, $C_{\ell} ^{BB}$, $C_{\ell} ^{TE}$, $C_{\ell} ^{EB}$ and $C_{\ell} ^{TB}$ as:
$\begin{equation} C_{\ell}^{XY} = \langle a^X_{\ell m}\left(a^Y_{\ell m}\right)^*\rangle = \frac{1}{2\ell +1} \sum_{m=-\ell}^{\ell} a^X_{\ell m}a^{Y*}_{\ell m} \quad , \ \text{where}\quad X,Y \in \{T,E,B\}. \tag{1.18} \end{equation}$
If we assume that the mechanisms involved in the generation of the CMB anisotropies conserve parity, there will not be any coupling between $B$ and $T$ or $E$. Consequently, only four of the six combinations shown above are expected to be non-zero, and we assume $C^{EB}_{\ell} = C^{TB}_{\ell}=0$.
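A small Monte-Carlo sketch of the estimator in Eq. (1.18): we draw many correlated $(a^T, a^E)$ pairs, standing in for the $m$-modes of one multipole, with illustrative (non-physical) input spectra, and recover $C^{TT}$, $C^{EE}$ and $C^{TE}$ from the $m$-average; with no $B$ signal the $TB$ cross-term vanishes, as argued above:

```python
import numpy as np

# Draw correlated Gaussian pairs with <aT^2> = c_tt, <aE^2> = c_ee,
# <aT aE> = c_te, using a Cholesky-style construction. The values are
# illustrative, not a real CMB model.
rng = np.random.default_rng(2)
c_tt, c_ee, c_te = 1.0, 0.4, 0.3
n = 200_000                         # many "m-modes" to beat down the scatter
g1 = rng.normal(size=n)
g2 = rng.normal(size=n)
aT = np.sqrt(c_tt) * g1
aE = (c_te / np.sqrt(c_tt)) * g1 + np.sqrt(c_ee - c_te**2 / c_tt) * g2
aB = np.zeros(n)                    # parity: no B-mode, so TB and EB vanish

# m-averaged estimates, cf. Eq. (1.18)
print(np.mean(aT * aT), np.mean(aE * aE), np.mean(aT * aE), np.mean(aT * aB))
```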
The central goal of the Planck mission was to extract all of the information in the CMB temperature anisotropies, allowing cosmological parameters to be determined to much higher accuracy. In order to extract cosmological information we need to measure $C^{TT}_{\ell}$, $C^{EE}_{\ell}$, $C^{BB}_{\ell}$ and $C^{TE}_{\ell}$. Whereas the scalar perturbations amplified by the expansion, described by the $\Lambda$CDM model, only produce the $C^{TT}_{\ell}$, $C^{EE}_{\ell}$ and $C^{TE}_{\ell}$ spectra, primordial gravitational-wave perturbations can generate all of them. On the other hand, the amplitude of $C_{\ell}^{BB}$ predicted by the standard inflationary models is expected to be very small; recall the current limits on $r$ discussed when the cosmological parameters were introduced (Table $1$). As mentioned before, a $C_{\ell}^{BB}$ contribution from the $E$-mode has also been detected. The gravitational lensing effect accounts for the distortion or bending of light rays that massive objects produce when light passes near them. In this particular case, the LSS produce the conversion between $E$- and $B$-modes via gravitational lensing (note that the inverse conversion, from $B$- to $E$-modes, also takes place but is negligible in comparison).
# Once installed we can check the version with:
print('Using CAMB %s installed at %s'%(camb.__version__,os.path.dirname(camb.__file__)))
Using CAMB 1.3.2 installed at /home/laura/Utils/miniconda3/envs/CMB/lib/python3.9/site-packages/camb
Given the Cosmological Parameters we can use $\verb+CAMB+$ software to determine the power spectra we are interested in (see more details in [Lewis]). In our case we will introduce the parameters released by [PlanckVI, 2018] summarized in Table 1.
### We set up the Cosmological parameters for CAMB from (Table 2 Planck18 (TT,TE,EE+lowE+lensing+BAO+68%limits))
## Option 1: introduce them by hand
pars = camb.set_params(ombh2=0.02242, omch2=0.1193, ns=0.9665, omk=0, thetastar=1.04119/100, lmax=2550)
# print(pars)
results = camb.get_results(pars)
## Option 2: read the parameters from a stored file
# pars=camb.read_ini(os.path.join(path,'planck_2018.ini'))
# results = camb.get_results(pars)
## We can check the obtained spectra by printing the dictionary for CAMB power spectra
powers = results.get_cmb_power_spectra(pars, CMB_unit ='muK') #muK = muK_{CMB}
for name in powers:
print(name)
total unlensed_scalar unlensed_total lensed_scalar tensor lens_potential
## Once we have all the power spectra, we can store the lensed and unlensed D_l
totCL = powers['total'] # D_l
unlensedCL = powers['unlensed_scalar']
Dls_CAMB = np.transpose(totCL) ## needed shape for healpy maps
Cls_CAMB = np.zeros(Dls_CAMB.shape)
for i in np.arange(2, Dls_CAMB.shape[1]): # start at l=2 to avoid dividing by zero at l=0,1
    Cls_CAMB[:,i] = Dls_CAMB[:,i] * 2 * np.pi / (i*(i+1))
# np.save(os.path.join(path,"Dls_CAMB.npy"), Dls_CAMB)
# np.save(os.path.join(path,"Cls_CAMB.npy"), Cls_CAMB)
Dls_CAMB = np.load(os.path.join(path,"Theoretical/dls_CAMB.npy")).reshape(4,2601) #l=0,1 --> 0
Cls_CAMB = np.load(os.path.join(path,"Theoretical/cls_CAMB.npy")).reshape(4,2601)
ells_CAMB = np.arange(Cls_CAMB.shape[1])
When generating the maps with $\verb+healpy+$ [Zonca, 2019] we need the $C_{\ell}$'s to have shape (4, lmax) or (6, lmax), where the first dimension indexes the spectra (i.e. $C_{\ell}^{TT}$, $C_{\ell}^{EE}$, $C_{\ell}^{BB}$, $C_{\ell}^{TE}$ and, optionally, $C_{\ell}^{TB}$ and $C_{\ell}^{EB}$). For this purpose we transpose the $\verb+totCL+$ array returned by $\verb+CAMB+$.
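The $D_\ell \to C_\ell$ conversion above can also be written without a Python loop, skipping the $\ell=0,1$ entries whose $\ell(\ell+1)$ factor vanishes; a vectorized sketch with a stand-in array (the notebook itself loads the real arrays from CAMB):

```python
import numpy as np

# Stand-in for the transposed CAMB output: 4 spectra, multipoles l = 0..lmax.
lmax = 10
dls = np.ones((4, lmax + 1))
ells = np.arange(lmax + 1)

# C_l = 2*pi * D_l / (l*(l+1)), leaving l = 0, 1 at zero as assumed
# elsewhere in the notebook.
cls = np.zeros_like(dls)
cls[:, 2:] = dls[:, 2:] * 2 * np.pi / (ells[2:] * (ells[2:] + 1))
```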
%matplotlib inline
fig, ax = plt.subplots(1,1, figsize = (12,12))
ax.plot(ells_CAMB[2:],totCL[2:,0], color='C0',linewidth=3) #TT lensed
ax.plot(ells_CAMB[2:],unlensedCL[2:,0],linestyle='dashed', color='C0',linewidth=3)
ax.plot(ells_CAMB[2:],totCL[2:,3], color='C1',linewidth=3) #TE lensed
ax.plot(ells_CAMB[2:],unlensedCL[2:,3],linestyle='dashed', color='C1',linewidth=3)
ax.plot(ells_CAMB[2:],totCL[2:,1], color='C2',linewidth=3) #EE lensed
ax.plot(ells_CAMB[2:],unlensedCL[2:,1],linestyle='dashed', color='C2',linewidth=3)
ax.plot(ells_CAMB[2:],totCL[2:,2], color='C3',linewidth=3) #BB lensed
ax.plot(ells_CAMB[2:],unlensedCL[2:,2],linestyle='dashed', color='C3',linewidth=3)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_title('CMB power spectra',fontweight='bold',fontsize=30)
ax.set_xlabel(r'Multipole moment $\ \ell$')
ax.set_ylabel(r'$\ell(\ell+1) \cdot C_\ell\ / \ 2\pi \quad [\mu K^2]$')
plt.annotate('TT',xy=(2.5, 1400), color='C0',fontweight='bold',fontsize=17)
plt.annotate('TE',xy=(2.5, 5), color='C1',fontweight='bold',fontsize=17)
plt.annotate('EE',xy=(2.5, 0.1), color='C2',fontweight='bold',fontsize=17)
plt.annotate('BB',xy=(60, 0.002), color='C3',fontweight='bold',fontsize=17)
ax.plot(0,0,color='C7',label='Lensed',linewidth=3)
ax.plot(0,0, linestyle='dashed',color='C7',label='Unlensed',linewidth=3)
ax.legend(loc='lower left')
ax.set_xlim([2,2000])
ax.set_ylim([10**-4,10**4]);
f, (ax, ax2) = plt.subplots(2, 1, sharex=True, figsize = (12,12))
# plot the same data on both axes
ax.plot(ells_CAMB[1000:1590],totCL[1000:1590,0], color='C0',linewidth=3)
ax.plot(ells_CAMB[1000:1590],unlensedCL[1000:1590,0],linestyle='dashed', color='C0',linewidth=3)
ax2.plot(ells_CAMB[1000:1590],totCL[1000:1590,3], color='C1',linewidth=3)
ax2.plot(ells_CAMB[1000:1590],unlensedCL[1000:1590,3],linestyle='dashed', color='C1',linewidth=3)
ax2.plot(ells_CAMB[1000:1590],totCL[1000:1590,1], color='C2',linewidth=3)
ax2.plot(ells_CAMB[1000:1590],unlensedCL[1000:1590,1],linestyle='dashed', color='C2',linewidth=3)
# zoom-in / limit the view to different portions of the data
ax.set_ylim(450,1400) # outliers only
ax.set_yscale('log')
# ax.set_xscale('log')
ax2.set_ylim(10**-1, 50) # most of the data
ax2.set_yscale('log')
# ax2.set_xscale('log')
ax.set_title('CMB power spectra',fontweight='bold')
ax2.set_xlabel(r'$\ell$')
ax2.set_ylabel(r'$\ell(\ell+1) \cdot C_l\ / \ 2\pi \quad [\mu K^2]$')
# hide the spines between ax and ax2
ax.spines['bottom'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax.xaxis.tick_top()
ax.tick_params(labeltop=False) # don't put tick labels at the top
ax2.xaxis.tick_bottom()
# Draw the diagonal "cut" marks at the axis break. In axes coordinates
# (always between 0 and 1) the spine endpoints sit at (0,0), (0,1), (1,0)
# and (1,1), so we place short diagonals at the appropriate corners, using
# the right transform and with clipping disabled.
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass to plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
ax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal
ax2.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal
ax.annotate('TT',xy=(1450, 850), color='C0',fontweight='bold', fontsize=25)
ax2.annotate('TE',xy=(1450, 4), color='C1',fontweight='bold', fontsize=25)
ax2.annotate('EE',xy=(1450, 18), color='C2',fontweight='bold', fontsize=25)
ax2.plot(1500,1500,color='C7',label='Lensed',linewidth=3)
ax2.plot(1500,1500, linestyle='dashed',color='C7',label='Unlensed',linewidth=3)
ax2.legend()
plt.show()
We represent the lensed and unlensed angular power spectra, so that we can notice the gravitational lensing effect (the lensed curves differ slightly from the unlensed ones due to this effect). Also, it is important to notice that in our case $r = 0$, which implies that there are no tensor modes, leading to a null primordial $C_{\ell}^{BB}$. As mentioned before, the LSS induce, through the gravitational lensing effect, a leakage from $E$-modes to $B$-modes. We can see how this effect becomes significant at higher $\ell$ values.
The variations in the CMB temperature maps at multipoles $\ell\geq 2$ are interpreted as being mostly the result of perturbations in the density of the early Universe. The theoretical models used to describe the CMB generally predict that the $a_{\ell m}$ modes are Gaussian random fields to high precision, as mentioned and assumed before. However, some recent observations provided by Planck show mild deviations from this description, often referred to as "anomalies". Some of these are the lack of power in the multipole range $\ell \simeq 20-30$, the "cold spot" or the power asymmetry between hemispheres [PlanckVII, 2020], [PlanckXXIII, 2014], [Jeong, 2020]. Nevertheless, these deviations need to be confirmed with higher precision, as the available data only provide a $(2-3)\,\sigma$ discrepancy [PlanckVI, 2018]. As we have seen in Figure 3, the cosmic variance at large scales limits the CMB study. To reduce these uncertainties, the idea is to exploit the information in the CMB anisotropies that is correlated or uncorrelated with polarization. It is known that the CMB temperature and polarization anisotropies are mostly independent, and that the secondary anisotropies of the CMB can be detected using the cross-correlation between the LSS and the CMB temperature fluctuations. For this reason, we will analyse temperature and polarization maps trying to reduce the CMB uncertainty at large scales.
In [Frommert and Enßlin, 2009] a method to reduce the noise by analysing the information contained in the polarization of the CMB was proposed. Their objective was to study the integrated Sachs–Wolfe (ISW) effect, so they computed the correlated and uncorrelated temperature maps and studied the reduction of the uncertainties. We will follow this idea but, in our case, aiming to obtain maps that would eventually help us to probe the origin of the CMB anomalies. We will translate the observed $E$-mode polarization maps into temperature maps using the $TE$ angular cross-power spectrum. These temperature maps are then subtracted from the observed temperature maps, leaving the uncorrelated part of the maps, which has a smaller contribution to the uncertainty of the detected signal. The aim of the project is to obtain information from these temperature maps correlated and uncorrelated with the $E$-mode polarization (for completeness we will also study the $E$-mode polarization correlated with temperature), and so we will mostly analyse the $C^{TT}_{\ell}$, $C^{EE}_{\ell}$ and $C^{TE}_{\ell}$ power spectra. We have also applied this methodology to sky-map simulations of future missions, in particular the LiteBIRD (Lite (Light) satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection) mission. Although the primary objective of this satellite is the detection of primordial gravitational waves through the footprint left on the polarized CMB $B$-modes, LiteBIRD is expected to retrieve an $E$ map with uncertainties on the order of the cosmic variance limit [Hazumi, 2020].
Once we have developed a methodology to obtain correlated and uncorrelated maps, we will use it to discern whether it would allow us to detect the mentioned anomalies with higher significance. In our case, we will focus on the lack-of-power anomaly at large scales, which refers to the systematic reduction of the angular power spectrum measured at low multipoles. This can be observed in Figure 3, in the multipole range $\ell \sim 20-30$, where we show the temperature angular power spectrum measured by Planck.
Our intention is to generate simulations of the CMB map obtained after component separation, i.e., after applying a pipeline to extract the CMB signal from the multi-frequency measured sky signal. These include noise and foreground residuals, i.e., a leftover signal from the foreground emission after component separation. It is useful to define the measured sky signal harmonic coefficients [Kamionkowski, 1997] as:
\begin{equation} s_{\ell m}^X = \int X^{\text{map}}(\Omega) Y^*_{\ell m}(\Omega) d\Omega \approx \sum^{N_{pix}}_{j=1} \frac{4 \pi}{N_{pix}} X^{\text{map}}_j Y^*_{\ell m}(\theta_j, \phi_j) , \tag{2.1} \end{equation}with $X^{\text{map}}_j$ the $j$th pixel of the corresponding map, located in the sky at coordinates $(\theta_j, \phi_j)$. It is important to note that these measured coefficients include noise, foreground residuals and beam effects, and that the observed map is the sum of the cosmological signal (CMB), noise and foreground residuals:
$\begin{equation} s_{\ell m} = \left[a_{\ell m}^{\text{CMB}} \cdot p_{\ell} b_{\ell} \right]+ a^{\text{Noise}}_{\ell m} + \left[ a_{\ell m}^{\text{Foreg}}\cdot b_{\ell} \right]. \label{eq:sumCls_noiseCMBForeg} \tag{2.2} \end{equation}$
Assuming no correlation between the cosmological signal, the noise and the foreground residuals, we can express the power spectra as:
$\begin{equation} S^{XY}_{\ell} \equiv \langle s_{\ell m}^X s_{\ell m}^{Y*} \rangle = \left( \bar{C}^{XY}_{\ell} \cdot p^2_{\ell} b^2_{\ell} \right) + N^{XY}_{\ell} + \left( \bar{F}^{XY}_{\ell} \cdot b_{\ell}^2 \right)\ ,\ \text{where} \ X,Y \in \{T,E,B\}, \label{eq:total_powerspectra} \tag{2.3} \end{equation}$
where $p_{\ell}$ is the pixel window function, which describes the effect of the pixelated resolution on the sky signal, and $b_{\ell}$ is the beam window function, which accounts for the smearing related to the finite beam width of any observing instrument. We assume a Gaussian shape for the beam, $b_{\ell} = \exp \left(-\ell (\ell + 1) \sigma^2_b/2\right)$, where $\sigma_b = \theta_{\text{FWHM}}/\sqrt{8\cdot \ln2}$, and $\theta_{\text{FWHM}}$ is the full-width at half-maximum (FWHM) of the beam. We assume that the noise is an uncorrelated Gaussian random variable, so its spectrum can be modelled analytically. We have also introduced a simplified notation: $S_{\ell}^{XY}$ is the total spectrum, $C^{XY}_{\ell } \equiv \left( \bar{C}^{XY}_{\ell} \cdot p^2_{\ell} b^2_{\ell} \right)$ refers to the CMB, $N^{XY}_{\ell}$ to the noise and $F^{XY}_{\ell} \equiv \left( \bar{F}^{XY}_{\ell} \cdot b_{\ell}^2 \right)$ to the foreground spectra.
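To make Eq. (2.3) concrete, the sketch below assembles a toy total spectrum with numpy. The helpers `gaussian_beam` and `total_spectrum` are our own illustrative names (the spectra are flat placeholders, not the CAMB ones used later), and the beam follows the usual convention $b_{\ell} = \exp(-\ell(\ell+1)\sigma_b^2/2)$:

```python
import numpy as np

def gaussian_beam(lmax, fwhm_arcmin):
    """Gaussian beam window b_l for a beam of the given FWHM (in arcmin)."""
    ell = np.arange(lmax + 1)
    sigma_b = np.deg2rad(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    return np.exp(-0.5 * ell * (ell + 1) * sigma_b**2)

def total_spectrum(cl_cmb, nl, fl, pl, bl):
    """Assemble S_l = C_l p_l^2 b_l^2 + N_l + F_l b_l^2, as in Eq. (2.3)."""
    return cl_cmb * pl**2 * bl**2 + nl + fl * bl**2

lmax = 1500
bl = gaussian_beam(lmax, fwhm_arcmin=30.0)  # LiteBIRD-like 30' beam
pl = np.ones(lmax + 1)                      # placeholder pixel window
cl = np.ones(lmax + 1)                      # toy flat CMB spectrum
nl = np.full(lmax + 1, 1e-4)                # toy white-noise level
fl = np.full(lmax + 1, 1e-3)                # toy foreground residual
Sl = total_spectrum(cl, nl, fl, pl, bl)
```

The beam suppresses the CMB and foreground terms at high $\ell$, while the white-noise term is left untouched, which is why noise dominates the total spectrum at small scales.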
The CMB temperature anisotropies can be separated into a part correlated and a part uncorrelated with polarization according to $\Lambda$CDM, and one can act as an uncertainty source for the other by contributing to the cosmic variance. With regard to the anomaly studies, sometimes the relevant information might be encoded in either the correlated or the uncorrelated part. Thus, we are interested in obtaining both the correlated and uncorrelated contributions independently, in order to reduce the intrinsic sources of uncertainty and draw statistically more significant conclusions.
In practice, we will separate the temperature maps into two components: an $E$-correlated ($TcE$) and an $E$-uncorrelated ($TncE$) part. The definition of the uncorrelated temperature map and its angular power spectrum is given in [Frommert, 2009] as:
$\begin{equation} w_{\ell}^{E} = \frac{C_{\ell}^{TE}}{C_{\ell}^{EE}}\hspace{0.5cm} , \hspace{0.5cm} a^{TncE}_{\ell m} = a^T_{\ell m} - a^E_{\ell m} \cdot \left[\frac{C_{\ell}^{TE}}{C^{EE}_{\ell}} \right] = a^T_{\ell m} - a^{TcE}_{\ell m}\ . \tag{2.4} \end{equation}$
The polarization map $a^{E}_{\ell m}$ is correlated with the temperature fluctuations via $C_{\ell}^{TE}$, and so it contains information about the temperature map. This can be translated into a map correlated with the temperature map, $(C_{\ell}^{TE}/C^{EE}_{\ell})\, a^E_{\ell m}$. It is then subtracted from the observed temperature map, leaving the uncorrelated temperature fluctuations, $a^{TncE}_{\ell m}$. It can be seen in [Frommert, 2009] that with this method the variance is reduced in every mode by the term $(C_{\ell}^{TE})^2/C^{EE}_{\ell}$. For completeness we will also consider the $E$-mode polarization correlated ($EcT$) and uncorrelated ($EncT$) maps with temperature, which are defined analogously as:
$\begin{equation} w_{\ell}^{T} = \frac{C_{\ell}^{TE}}{C_{\ell}^{TT}}\hspace{0.5cm} , \hspace{0.5cm} a^{EncT}_{\ell m} = a^E_{\ell m} - a^T_{\ell m} \cdot \left[\frac{C_{\ell}^{TE}}{C^{TT}_{\ell}} \right] = a^E_{\ell m} - a^{EcT}_{\ell m}\ . \tag{2.5} \end{equation}$
Summing up all this information, we can express the Wiener filter and the correlated part of a given map in the general case as:
$\begin{equation} \boxed{w^{Y}_{\ell} = \frac{C^{XY}_{\ell} }{C^{YY}_{\ell}}\quad \text{and} \quad a^{XcY}_{\ell m} = a^Y_{\ell m} w^{Y}_{\ell}} \quad . \label{eq:Wiener_filter} \tag{2.6} \end{equation}$
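As a quick numerical illustration of Eq. (2.6), with toy spectra and our own helper name (`wiener_split` is not part of the notebook's pipeline), we can check that the split is exact by construction and that the uncorrelated part has variance $C^{TT}_{\ell} - (C^{TE}_{\ell})^2/C^{EE}_{\ell}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def wiener_split(alm_X, alm_Y, cl_XY, cl_YY):
    """Split a_X into its Y-correlated and Y-uncorrelated parts (Eq. 2.6)."""
    w = cl_XY / cl_YY
    alm_corr = w * alm_Y            # a^{XcY}
    alm_uncorr = alm_X - alm_corr   # a^{XncY}
    return alm_corr, alm_uncorr

# Draw correlated Gaussian (a^T, a^E) pairs from a toy TT/TE/EE covariance.
cl_TT, cl_TE, cl_EE = 1000.0, 50.0, 10.0
cov = np.array([[cl_TT, cl_TE], [cl_TE, cl_EE]])
alm_T, alm_E = rng.multivariate_normal([0.0, 0.0], cov, size=20000).T

a_TcE, a_TncE = wiener_split(alm_T, alm_E, cl_TE, cl_EE)

# Expected residual variance: C^TT - (C^TE)^2 / C^EE = 1000 - 2500/10 = 750.
print(np.var(a_TncE))
```

The uncorrelated part is, by construction, statistically independent of $a^E_{\ell m}$ for Gaussian fields, which is exactly what makes it a clean observable.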
#########################################
#### Theoretical Wiener filter (CMB) ####
#########################################
# wT_th = (Cls_CAMB[3,2:]/Cls_CAMB[0,2:])
# wE_th = (Cls_CAMB[3,2:]/Cls_CAMB[1,2:])
## Save
# np.save(os.path.join(path,"wT_th.npy"),wT_th)
# np.save(os.path.join(path,"wE_th.npy"),wE_th)
WT_th = np.load(os.path.join(path,"Theoretical/wT_th.npy"))
WE_th = np.load(os.path.join(path,"Theoretical/wE_th.npy"))
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (15,5))
ax[0].plot(ells_CAMB[2:], WT_th, color='navy', linewidth=3);
# ax[0].set_title('Filter for obtaining the E correlated maps')
# plt.yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$w_T$')
ax[0].set_xlabel(r'$\ell$')
ax[1].plot(ells_CAMB[2:], WE_th, color='navy', linewidth=3);
# ax[1].set_title('Filter for obtaining the T correlated maps')
# plt.yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$w_E$')
ax[1].set_xlabel(r'$\ell$')
plt.show()
Neglecting the effects of the contaminants, we can analyse how much the variance gets reduced at the different multipoles. In the following plot we show the CMB temperature and $E$-mode polarization power spectra $C_{\ell}^X$, together with the uncorrelated $C_{\ell}^X - (C_{\ell}^{XY})^2/C^{YY}_{\ell}$ and the correlated $(C_{\ell}^{XY})^2/C^{YY}_{\ell}$ parts.
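The shaded bands in the plots below come from a helper called `cov_mat_cosmic_variance`, defined elsewhere in the notebook; a minimal sketch of such a function, assuming the standard Gaussian result $\mathrm{Var}(C_\ell) = 2C_\ell^2/((2\ell+1)f_{sky})$, could look like:

```python
import numpy as np

def cosmic_variance(ell, cl, fsky=1.0):
    """Cosmic variance of a C_l estimate for a Gaussian sky:
    Var(C_l) = 2 C_l^2 / ((2l + 1) * fsky)."""
    ell = np.asarray(ell, dtype=float)
    return 2.0 * np.asarray(cl)**2 / ((2.0 * ell + 1.0) * fsky)

# Toy usage: unit spectrum at low multipoles, full sky.
ell = np.arange(2, 11)
var = cosmic_variance(ell, np.ones_like(ell, dtype=float))
```

The $1/(2\ell+1)$ factor is why the cosmic variance is so limiting at the largest scales: at $\ell = 2$ only five independent $a_{\ell m}$ modes are available.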
fig, ax = plt.subplots(1,2, figsize = (20,8))
# ax[0].set_title('Temperature theoretical correlated maps', fontsize='large',fontweight='bold')
ax[0].plot(l[2:],Dls_CAMB[0,2:lmax+1], color='C0', label=r'$C_{\ell}^{TT}$',linewidth=3) #TT
ax[0].fill_between(l[2:], Dls_CAMB[0,2:lmax+1] - np.sqrt(cov_mat_cosmic_variance(l[2:],Dls_CAMB[0,2:lmax+1],fsky=1)), Dls_CAMB[0,2:lmax+1] + np.sqrt(cov_mat_cosmic_variance(l[2:],Dls_CAMB[0,2:lmax+1],fsky=1)), color='C0', alpha=0.4)
ax[0].plot(l[2:],Dls_CAMB[0,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1],linestyle='dashed', color='C2',label=r'$C_{\ell}^{TT}-(C_{\ell}^{TE})^2/C_{\ell}^{EE}$',linewidth=3) #TncE
ax[0].fill_between(l[2:], (Dls_CAMB[0,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]) - np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[0,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]),fsky=1)),(Dls_CAMB[0,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]) + np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[0,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]),fsky=1)), color='C2', alpha=0.4)
ax[0].plot(l[2:],Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1], linestyle=':', color='C1',label=r'$(C_{\ell}^{TE})^2/C_{\ell}^{EE}$',linewidth=3) #TcE
ax[0].fill_between(l[2:], (Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]) - np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]),fsky=1)),(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]) + np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[1,2:lmax+1]),fsky=1)), color='C1', alpha=0.4)
ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylim([1e-1,6500]) # log scale needs a positive lower limit
ax[0].set_xlabel(r'$\ell$')
ax[0].set_ylabel(r'$D_{\ell} \quad [\mu K^2]$')
ax[0].fill_between(l[2:], 0,0, color='gray', alpha=0.4, label='Cosmic variance')
ax[0].legend()
# ax[1].set_title('E-mode polarization theoretical correlated maps', fontsize='large',fontweight='bold')
ax[1].plot(l[2:],Dls_CAMB[1,2:lmax+1], color='C0', label=r'$C_{\ell}^{EE}$',linewidth=3) #EE
ax[1].fill_between(l[2:], Dls_CAMB[1,2:lmax+1] - np.sqrt(cov_mat_cosmic_variance(l[2:],Dls_CAMB[1,2:lmax+1],fsky=1)), Dls_CAMB[1,2:lmax+1] + np.sqrt(cov_mat_cosmic_variance(l[2:],Dls_CAMB[1,2:lmax+1],fsky=1)), color='C0', alpha=0.4)
ax[1].plot(l[2:], Dls_CAMB[1,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1],linestyle='dashed', color='C2',label=r'$C_{\ell}^{EE}-(C_{\ell}^{TE})^2/C_{\ell}^{TT}$',linewidth=3) #EncT
ax[1].fill_between(l[2:],(Dls_CAMB[1,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]) - np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[1,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]),fsky=1)), (Dls_CAMB[1,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]) + np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[1,2:lmax+1] - Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]),fsky=1)), color='C2', alpha=0.4)
ax[1].plot(l[2:],Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1], linestyle=':', color='C1',label=r'$(C_{\ell}^{TE})^2/C_{\ell}^{TT}$',linewidth=3) #EcT
ax[1].fill_between(l[2:], (Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]) - np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]),fsky=1)),(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]) + np.sqrt(cov_mat_cosmic_variance(l[2:],(Dls_CAMB[3,2:lmax+1]**2/Dls_CAMB[0,2:lmax+1]),fsky=1)), color='C1', alpha=0.4)
ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_xlabel(r'$\ell$')
ax[1].set_ylabel(r'$D_{\ell} \quad [\mu K^2]$')
ax[1].fill_between(l[2:], 0,0, color='gray', alpha=0.4, label='Cosmic variance')
ax[1].legend()
plt.show()
In a more general case, where contaminants such as noise and/or foregrounds are included, we can use the expression shown in [PlanckXXI, 2016]: \begin{equation} w_{\ell}^{E} = \frac{C_{\ell}^{TE} + F_{\ell}^{TE}}{C_{\ell}^{EE} + N_{\ell}^{EE} + F_{\ell}^{EE}} \hspace{1cm} \boxed{a^{TncE}_{\ell m} = a^T_{\ell m} - a^E_{\ell m} \cdot w_{\ell}^{E} = a^T_{\ell m} - a^{TcE}_{\ell m} } \tag{2.7} \end{equation}
\begin{equation} w_{\ell}^{T} = \frac{C_{\ell}^{TE} + F_{\ell}^{TE}}{C_{\ell}^{TT} + N_{\ell}^{TT} + F_{\ell}^{TT}} \hspace{1cm} \boxed{a^{EncT}_{\ell m} = a^E_{\ell m} - a^T_{\ell m} \cdot w_{\ell}^{T} = a^E_{\ell m} - a^{EcT}_{\ell m}} \tag{2.8} \end{equation}It is important to note that we need to include the pixel and Gaussian beam window functions in order to obtain theoretical filters compatible with the simulated ones. We include these functions to make the simulations similar to what we expect from observations.
\begin{equation} D_{\ell}^{\text{sim}} \equiv D_{\ell}^{\text{CAMB}} \cdot (p_{\ell}^X \cdot p_{\ell}^Y)(b_{\ell}^X \cdot b_{\ell}^Y) \tag{2.9} \end{equation}
######################################
## Typical noise level for LiteBIRD ##
######################################
## Noise -> Gaussian realization from known variance
# 2.6 muK arcmin -> pixel sensitivity at nside = 512 (muK)
npix = hp.nside2npix(nside)
sigma_T = (2.6/np.sqrt(2)) / (Anside*(180*60/np.pi))
sigma_P = 2.6 / (Anside*(180*60/np.pi))
noise_map_T = np.random.normal(0,sigma_T,npix)
noise_map_Q = np.random.normal(0,sigma_P,npix)
noise_map_U = np.random.normal(0,sigma_P,npix)
noise_maps = [noise_map_T, noise_map_Q, noise_map_U]
# For more details on noise map generation, see the Noise simulations section #
###############################################
#### Theoretical Wiener filter (CMB+Noise) ####
###############################################
# Cls_noise_T = np.ones(lmax+1)*4*np.pi*sigma_T**2/npix
# Cls_noise_P = np.ones(lmax+1)*4*np.pi*sigma_P**2/npix
# Dls_noise_T = (l*(l+1)) * Cls_noise_T / (2 * np.pi) #Noise model for Temperature
# Dls_noise_P = (l*(l+1)) * Cls_noise_P / (2 * np.pi) #Noise model for Polarization
# wT_th_noise = (Cls_CAMB[3,2:lmax+1] * fw_TEB[2:,0]*fw_TEB[2:,1] * px[0][2:]*px[1][2:]) / (Cls_CAMB[0,2:lmax+1] * fw_TEB[2:,0]**2 * px[0][2:]**2 + Cls_noise_T[2:])
# wE_th_noise = (Cls_CAMB[3,2:lmax+1] * fw_TEB[2:,0]*fw_TEB[2:,1] * px[0][2:]*px[1][2:]) / (Cls_CAMB[1,2:lmax+1] * fw_TEB[2:,1]**2 * px[1][2:]**2 + Cls_noise_P[2:])
## Save:
# np.save(os.path.join(path,"wT_th_noise.npy"), wT_th_noise)
# np.save(os.path.join(path,"wE_th_noise.npy"), wE_th_noise)
# np.save(os.path.join(path,"Dls_noise_T.npy"), Dls_noise_T)
# np.save(os.path.join(path,"Dls_noise_P.npy"), Dls_noise_P)
# np.save(os.path.join(path,"Cls_noise_T.npy"), Cls_noise_T)
# np.save(os.path.join(path,"Cls_noise_P.npy"), Cls_noise_P)
Dls_noise_T = np.load(os.path.join(path,"Theoretical/Dls_noise_T.npy"))
Dls_noise_P = np.load(os.path.join(path,"Theoretical/Dls_noise_P.npy"))
WT_th_noise = np.load(os.path.join(path,"Theoretical/wT_th_noise.npy")) #CAMB+Noise power spectrum model(LiteBIRD)
WE_th_noise = np.load(os.path.join(path,"Theoretical/wE_th_noise.npy"))
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (15,5))
# ax[0].set_title('Filter for obtaining the E correlated maps')
ax[0].plot(ells_CAMB[2:2*nside], WT_th_noise[:2*nside-2], color='navy', linewidth=3);
# ax[0].yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$w_T$')
ax[0].set_xlabel(r'$\ell$')
# ax[1].set_title('Filter for obtaining the T correlated maps')
ax[1].plot(ells_CAMB[2:2*nside], WE_th_noise[:2*nside-2], color='navy', linewidth=3);
# ax[1].yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$w_E$')
ax[1].set_xlabel(r'$\ell$')
plt.show()
We can see that the $w_{\ell}^T$ filter is orders of magnitude smaller than the $w_{\ell}^E$ filter. This is going to lead to a smaller signal in the $E$ correlated maps compared with the $T$ correlated maps. This is one of the reasons why most studies are centred on the latter, but we are going to obtain all four combinations of correlated maps for completeness.
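As a quick sanity check of the contaminated filters (Eqs. 2.7-2.8), with toy spectra of our own choosing (illustrative shapes only, not the notebook's data): the filter should reduce to the ideal $C_{\ell}^{TE}/C_{\ell}^{EE}$ in the noiseless limit, and any extra power in the denominator can only lower it:

```python
import numpy as np

def wiener_filter_E(cl_TE, cl_EE, nl_EE=0.0, fl_TE=0.0, fl_EE=0.0):
    """Contaminated filter of Eq. (2.7); defaults recover the ideal case."""
    return (cl_TE + fl_TE) / (cl_EE + nl_EE + fl_EE)

ell = np.arange(2, 101, dtype=float)
cl_TE = 50.0 / ell          # toy spectra with a constant TE/EE ratio of 5
cl_EE = 10.0 / ell
nl_EE = 1e-3                # toy flat noise level

w_ideal = wiener_filter_E(cl_TE, cl_EE)
w_noisy = wiener_filter_E(cl_TE, cl_EE, nl_EE=nl_EE)
```

Since $C_{\ell}^{EE}$ falls with $\ell$ while the white noise is flat, the suppression of the noisy filter grows towards small scales, as seen in the plots above.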
## In a similar way we could be interested in an analysis using the typical noise for Planck:
# npix = hp.nside2npix(nside)
# sigma_T = np.sqrt((npix/(4*np.pi))*Cls_CAMB[0,1500]) #muK per pixel
# sigma_P = np.sqrt((npix/(4*np.pi))*Cls_CAMB[1,800]) #muK per pixel
# noise_map_T= np.random.normal(0,sigma_T,npix)
# noise_map_Q = np.random.normal(0,sigma_P,npix)
# noise_map_U = np.random.normal(0,sigma_P,npix)
# noise_maps = [noise_map_T, noise_map_Q, noise_map_U]
# Cls_noise_T = np.ones(lmax+1)*4*np.pi*sigma_T**2/npix
# Cls_noise_P = np.ones(lmax+1)*4*np.pi*sigma_P**2/npix
# Dls_noise_T = (l*(l+1)) * Cls_noise_T / (2 * np.pi) #Temperature noise model
# Dls_noise_P = (l*(l+1)) * Cls_noise_P / (2 * np.pi) #Polarization noise model
# wT_th_noise = (Cls_CAMB[3,2:lmax+1] * fw_TEB[2:,0]*fw_TEB[2:,1] * px[0][2:]*px[1][2:]) / (Cls_CAMB[0,2:lmax+1] * fw_TEB[2:,0]**2 * px[0][2:]**2 + Cls_noise_T[2:])
# wE_th_noise = (Cls_CAMB[3,2:lmax+1] * fw_TEB[2:,0]*fw_TEB[2:,1] * px[0][2:]*px[1][2:]) / (Cls_CAMB[1,2:lmax+1] * fw_TEB[2:,1]**2 * px[1][2:]**2 + Cls_noise_P[2:])
## Save:
# np.save(os.path.join(path,"wT_th_noise.npy"),wT_th_noise)
# np.save(os.path.join(path,"wE_th_noise.npy"),wE_th_noise)
# np.save(os.path.join(path,"Dls_noise_T.npy"),Dls_noise_T)
# np.save(os.path.join(path,"Dls_noise_P.npy"),Dls_noise_P)
From the theoretical cosmological model presented in the previous section we can reach a continuous description of the CMB signal. In practice, it is not feasible to measure the sky with infinite precision, so we need to introduce a partition of the spherical surface, i.e., a sky pixelization. Thus, the sky is divided into pixels on a sphere, which allows computations to be carried out with equal-area pixels.
We will make use of the healpy package [Zonca, 2019], a Python implementation of $\verb+HEALPix+$ [Gorski, 2005], which is a standard pixelization scheme for handling discrete data on the sphere. This module is widely used in astrophysical analyses.

To promote the fast and accurate analysis of large full-sky data sets, $\verb+HEALPix+$ [Gorski, 2005] provides a mathematical structure capable of performing discretizations of functions over the sphere. The three main properties, depicted in Figure 4, are: (i) the sphere is hierarchically divided into curvilinear quadrilaterals; (ii) the areas of all pixels at a given resolution are identical; (iii) pixels are distributed on lines of constant latitude, which helps decrease the computational time involved in harmonic analyses. The resolution of these maps is given by the $N_{side}$ parameter, which is directly related to the number of pixels in a map as $N_{pix}= 12N^2_{side}$. Throughout the rest of the project we will use a resolution of $N_{side}= 512$.
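For reference, the basic numbers for the resolution used throughout can be derived from $N_{pix} = 12N_{side}^2$ alone (no healpy is needed for this estimate; the variable names are ours):

```python
import numpy as np

nside = 512
npix = 12 * nside**2                     # number of HEALPix pixels
pix_area_sr = 4.0 * np.pi / npix         # pixel area [sr] (equal for all pixels)
# approximate side of a pixel, treating it as a square, in arcmin
pix_side_arcmin = np.sqrt(pix_area_sr) * (180 * 60 / np.pi)

print(npix, pix_side_arcmin)
```

At $N_{side}=512$ the pixel scale (roughly 7 arcmin) is comfortably below the 30-arcmin beam assumed for the simulations, so pixelization does not limit the angular resolution here.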
After computing the power spectra with $\verb+CAMB+$ [Lewis, 2011] we perform a set of 100 simulations of CMB realizations (i.e., the $a_{\ell m}^{T,E,B}$ coefficients) with the $\verb+healpy.sphtfunc.synalm+$ routine, and build the associated temperature and $E$-mode polarization maps using $\verb+healpy.alm2map+$ [Zonca, 2019]. This results in CMB maps with the same statistical properties as the real CMB sky for the given input power spectrum. To obtain maps similar to the experimental ones, we have convolved the $a_{\ell m}$ of these maps with the pixel window function and smoothed them with a Gaussian beam of $\verb+fwhm=30'+$ (the typical value expected for LiteBIRD).
#################
### CMB maps ####
#################
cls_cmb = np.empty((nbmc, 6, lmax+1),float)
dls_cmb = np.empty((nbmc, 6, lmax+1),float)
wT_cmb = np.empty((nbmc,lmax-1),float)
wE_cmb = np.empty((nbmc,lmax-1),float)
# maps_TQU = fits.PrimaryHDU()
# maps_TQU.writeto('files/maps_TQU.fits',overwrite=True)
# map_E = fits.PrimaryHDU()
# map_E.writeto('files/map_E.fits',overwrite=True)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) # pick one realization index (the full MC loop above is commented out)
# The introduced Cls need to have shapes (4, lmax+1) or (6, lmax+1) and the TEB alms coefficients are returned.
alm_cmb = hp.sphtfunc.synalm(Cls_CAMB, lmax=lmax, mmax=None, new=True, verbose=True)
# We include pixwin and Gaussian beam windows functions to get similar CMB maps as the observed ones
maps_TQU = hp.sphtfunc.alm2map(alm_cmb, nside, lmax=None, mmax=None, pixwin=True, fwhm=fwhm, sigma=None, pol=True, inplace=False, verbose=True) #TQU
map_E = hp.sphtfunc.alm2map(alm_cmb[1], nside, lmax=None, mmax=None, pixwin=True, fwhm=fwhm, sigma=None, pol=False, inplace=False, verbose=True)
# fits.append('files/maps_TQU.fits', maps_TQU)
# fits.append('files/map_E.fits', map_E)
# With the maps we compute the simulated spectra
cls_cmb[i] = hp.sphtfunc.anafast(maps_TQU, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True) #cls TEB (pixwin+fwhm)
dls_cmb[i] = (l*(l+1)) * cls_cmb[i] / (2 * np.pi)
##### Wiener filter (angular power spectra of CMB)####
wT_cmb[i] = (cls_cmb[i,3,2:]/(cls_cmb[i,0,2:]))
wE_cmb[i] = (cls_cmb[i,3,2:]/(cls_cmb[i,1,2:]))
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
100.00 % completed ******************************************************
## Save:
# np.save(os.path.join(path,"cls_cmb.npy"), cls_cmb)
# np.save(os.path.join(path,"dls_cmb.npy"), dls_cmb)
# np.save(os.path.join(path,"wT_cmb.npy"), wT_cmb)
# np.save(os.path.join(path,"wE_cmb.npy"), wE_cmb)
Cls_cmb = np.load(os.path.join(path,"Data/cls_cmb.npy")).reshape(nbmc, 6, lmax+1) #pixwin+fwhm
Dls_cmb = np.load(os.path.join(path,"Data/dls_cmb.npy")).reshape(nbmc, 6, lmax+1)
WT_cmb = np.load(os.path.join(path,"Data/wT_cmb.npy")).reshape(nbmc,lmax-1) # CMB simulated
WE_cmb = np.load(os.path.join(path,"Data/wE_cmb.npy")).reshape(nbmc,lmax-1)
We can represent one realization of these CMB maps with $\verb+healpy.mollview+$ as follows:
fig, ax = plt.subplots(ncols=2,nrows=2,figsize = (15,9))
plt.axes(ax[0,0])
hp.mollview(maps_TQU[0], unit=r'$\mu K_{CMB}$', title='T map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[1,0])
hp.mollview(maps_TQU[1], unit=r'$\mu K_{CMB}$', title='Q map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[1,1])
hp.mollview(maps_TQU[2], unit=r'$\mu K_{CMB}$', title='U map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[0,1])
hp.mollview(map_E, unit=r'$\mu K_{CMB}$', title='E map', bgcolor='white',norm='hist',hold=True)
plt.suptitle('CMB maps')
plt.show()
We can notice the difference in the order of magnitude of the signals of the maps: as with the CMB spectra represented in the following plot, the temperature signal is much larger than the polarization one.
%matplotlib inline
fig, ax = plt.subplots(1,3, figsize = (30,8))
# ax[0].set_title('Temperature map')
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside],color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(ells_CAMB[2:2*nside], Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2 * px[0][2:2*nside]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[0].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[0,2:2*nside]-np.std(Dls_cmb, axis=0)[0,2:2*nside],np.mean(Dls_cmb, axis=0)[0,2:2*nside]+np.std(Dls_cmb, axis=0)[0,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$D_{\ell}^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend(loc='lower left')
# ax[1].set_title('E-mode polarization map')
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside],color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(ells_CAMB[2:2*nside],Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[1].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[1,2:2*nside]-np.std(Dls_cmb, axis=0)[1,2:2*nside],np.mean(Dls_cmb, axis=0)[1,2:2*nside]+np.std(Dls_cmb, axis=0)[1,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_{\ell}^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend(loc='lower left')
# ax[2].set_title('TE CMB map')
ax[2].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[3,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[2].plot(ells_CAMB[2:2*nside], Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*px[0][2:2*nside]*fw_TEB[2:2*nside,1]*px[1][2:2*nside]), color='k', linestyle='dashed', alpha=0.7, label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[2].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[3,2:2*nside]-np.std(Dls_cmb, axis=0)[3,2:2*nside],np.mean(Dls_cmb, axis=0)[3,2:2*nside]+np.std(Dls_cmb, axis=0)[3,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
# ax[2].set_yscale('log')
ax[2].set_xscale('log')
ax[2].set_ylabel(r'$D_{\ell}^{TE} \quad [\mu K^2]$')
ax[2].set_xlabel(r'$\ell$')
ax[2].legend()
plt.show()
Finally, we represent the simulated CMB filters together with the theoretical prediction.
## Plot of CMB simulated filters and theoretical prediction:
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (15,5))
# ax[0].set_title('Filter for almT with CMB spectrum')
ax[0].plot(l[2:2*nside], WT_th[:2*nside-2], color='navy', label=r'$w_{\ell}^{TH}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(WT_cmb, axis=0)[:2*nside-2],color='orangered', linestyle='dashed', label=r'$\langle w_{\ell} \rangle_{100}$',alpha=0.7,linewidth=3)
# ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel('$w_T$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend()
# ax[1].set_title('Filter for almE with CMB spectrum')
ax[1].plot(l[2:2*nside], WE_th[:2*nside-2], color='navy', label=r'$w_{\ell}^{TH}$', linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(WE_cmb, axis=0)[:2*nside-2],color='orangered', linestyle='dashed', label=r'$\langle w_{\ell} \rangle_{100}$', linewidth=3)
ax[1].set_xscale('log')
ax[1].set_ylabel('$w_E$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
plt.show()
We are going to consider Gaussian instrumental noise and assume, for simplicity, that the noise has a constant variance across the sky, in which case:
\begin{equation} N_{\ell} \equiv \langle a_{\ell m}^{Noise} a_{\ell m}^{Noise*} \rangle \Rightarrow \boxed{ N_{\ell} = \frac{4 \pi}{N_{pix}}(\sigma_{pix})^2} \label{eq:noise model} \tag{2.10} \end{equation}where $N_{pix}$ is the number of pixels in the map and $\sigma_{pix}$ is related to the sensitivity of the satellite. This formula implies that the noise has a flat power spectrum ($N_{\ell}=$ const.). Since the noise of the different maps is uncorrelated, there is no noise contribution to the cross-spectra, i.e., $N_{\ell}^{XY}= 0$ for $X \neq Y$.
With $\verb+healpy+$ [Zonca, 2019] we can draw, for each pixel, a Gaussian realization $\mathcal{N}(0,\sigma^2_{pix})$, where the noise dispersion is defined as:
\begin{equation} \sigma^X_{pix} = \frac{\sigma^X \, [\mu \text{K arcmin}]}{L_{pix}\, [\text{arcmin}]} \tag{2.11} \end{equation}with $L_{pix}$ the side length of a (roughly square) pixel at the given resolution, which can be estimated from the pixel area and the number of pixels in the map as:
\begin{equation} L_{pix} \sim (A_{pix})^{1/2} \sim \sqrt{\frac{4\pi}{N_{pix}}} \ \ \text{with} \ \ N_{pix}= 12\cdot N_{side}^2 \tag{2.12} \end{equation}where $A_{pix}$ is the pixel area in steradians. We consider the LiteBIRD resolution and sensitivity [Hazumi, 2020]: $N_{side}=512$, $\sigma^P = 2.6\ \mu$K arcmin for polarization observations and $\sigma^T = \sigma^P/\sqrt{2}$ for temperature observations. As the dispersion of the polarization noise maps is higher, the polarization maps will have larger noise contributions than the temperature maps. Even so, it is important to recall that the LiteBIRD satellite is expected to recover the polarization signal with the lowest noise contribution so far.
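The conversion chain of Eqs. (2.10)-(2.12) can be sketched numerically for the LiteBIRD-like case (the values are the ones quoted in the text; the variable names are ours):

```python
import numpy as np

nside = 512
npix = 12 * nside**2
sigma_amin_P = 2.6                          # polarization sensitivity [muK arcmin]
sigma_amin_T = sigma_amin_P / np.sqrt(2)    # temperature sensitivity [muK arcmin]

# Eq. (2.12): approximate pixel side, converted from radians to arcmin.
L_pix_arcmin = np.sqrt(4 * np.pi / npix) * (180 * 60 / np.pi)
# Eq. (2.11): per-pixel noise dispersion [muK].
sigma_pix_P = sigma_amin_P / L_pix_arcmin
# Eq. (2.10): flat noise spectrum [muK^2].
Nl_P = 4 * np.pi * sigma_pix_P**2 / npix
```

A useful cross-check: combining the three equations, $N_\ell$ collapses to the sensitivity squared in $\mu$K$\cdot$radian units, independently of $N_{side}$.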
###################
### Noise maps ####
###################
## Planck noise estimation ##
# sigma_T = np.sqrt((npix/(4*np.pi))*Cls_CAMB[0,1500])
# sigma_P = np.sqrt((npix/(4*np.pi))*Cls_CAMB[1,800])
## LiteBIRD ##
# 2.6 muK arcmin -> pixel sensitivity at nside = 512 (muK)
sigma_T = (2.6/np.sqrt(2)) / (Anside*(180*60/np.pi))
sigma_P = 2.6 / (Anside*(180*60/np.pi))
cls_noise=np.zeros((nbmc, 6, lmax+1),float)
dls_noise=np.zeros((nbmc, 6, lmax+1),float)
# noise_maps = fits.PrimaryHDU()
# noise_maps.writeto('files/noise_maps.fits',overwrite=True)
# noise_map_E = fits.PrimaryHDU()
# noise_map_E.writeto('files/noise_map_E.fits',overwrite=True)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) # pick one realization index (the full MC loop above is commented out)
noise_map_T = np.random.normal(0,sigma_T,npix)
noise_map_Q = np.random.normal(0,sigma_P,npix)
noise_map_U = np.random.normal(0,sigma_P,npix)
noise_maps = np.array([noise_map_T, noise_map_Q, noise_map_U], np.float64)
# We introduce TQU maps and the TEB alm coefficients are returned if pol=True.
alm_noise = hp.sphtfunc.map2alm(noise_maps, lmax=lmax, mmax=None, pol=True, verbose=True)
# The E-mode polarization map can be obtained from alm(E) previously obtained
noise_map_E = hp.sphtfunc.alm2map(alm_noise[1], nside, lmax=None, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True)
# fits.append('files/noise_maps.fits', noise_maps)
# fits.append('files/noise_map_E.fits', noise_map_E)
# Finally the 6 power spectra combination are computed with anafast():
cls_noise[i] = hp.sphtfunc.anafast(noise_maps, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
dls_noise[i] = (l*(l+1)) * cls_noise[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
100.00 % completed ******************************************************
# np.save(os.path.join(path,"cls_noise.npy"), cls_noise)
# np.save(os.path.join(path,"dls_noise.npy"), dls_noise)
Cls_noise = np.load(os.path.join(path,"Data/cls_noise.npy")).reshape(nbmc,6,lmax+1) #noise
Dls_noise = np.load(os.path.join(path,"Data/dls_noise.npy")).reshape(nbmc,6,lmax+1)
We can represent one realization of these noise maps with $\verb+healpy.mollview+$ as follows:
fig, ax = plt.subplots(ncols=2,nrows=2,figsize = (15,9))
plt.axes(ax[0,0])
hp.mollview(noise_map_T, unit=r'$\mu K_{CMB}$', title='T map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[1,0])
hp.mollview(noise_map_Q, unit=r'$\mu K_{CMB}$', title='Q map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[1,1])
hp.mollview(noise_map_U, unit=r'$\mu K_{CMB}$', title='U map', bgcolor='white',norm='hist',hold=True)
plt.axes(ax[0,1])
hp.mollview(noise_map_E, unit=r'$\mu K_{CMB}$', title='E map', bgcolor='white',norm='hist',hold=True)
plt.suptitle('Noise maps')
plt.show()
Now we can construct the total CMB + noise maps. We can particularize the general angular power spectra expressions for the case where the CMB and the noise are uncorrelated. Recalling the theoretical noise model, where the noise power spectrum is constant in $\ell$ and proportional to $(\sigma^X)^2$, we have:
$\begin{equation} S_{\ell}^{XY} \equiv \langle s_{\ell m}^X s_{\ell m}^{Y*} \rangle = b^2_{\ell} \bar{C}^{XY}_{\ell} + N^{XY}_{\ell} \left\{ \begin{array}{lll} S_{\ell}^{TT} = b^2_{\ell} \bar{C}^{TT}_{\ell} + N^{TT}_{\ell} \\ S_{\ell}^{EE} = b^2_{\ell} \bar{C}^{EE}_{\ell} + N^{EE}_{\ell} \\ S_{\ell}^{TE} = b^2_{\ell} \bar{C}^{TE}_{\ell} \end{array} \right. , \label{eq:sumCls_noiseCMB} \tag{2.13} \end{equation}$
where $N^{TE}_{\ell}=0$.
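As a quick numerical sketch of Eq. (2.13), assuming a Gaussian beam $b_\ell$ and white (constant) noise spectra; all the spectra and parameter values below are illustrative toy inputs, not notebook variables:

```python
import numpy as np

lmax = 64
ell = np.arange(lmax + 1)

# Toy CMB spectra (illustrative power-law shapes only)
C_TT = 1.0 / (ell + 2.0)**2
C_EE = 0.01 / (ell + 2.0)**2
C_TE = 0.05 / (ell + 2.0)**2

# Gaussian beam: b_l = exp(-l(l+1) sigma_b^2 / 2), sigma_b from an assumed FWHM
fwhm_rad = np.radians(0.5)
sigma_b = fwhm_rad / np.sqrt(8.0 * np.log(2.0))
b_ell = np.exp(-ell * (ell + 1) * sigma_b**2 / 2.0)

# White-noise spectra: constant in ell, proportional to sigma^2 (toy values)
N_TT, N_EE = 1e-6, 2e-6

# Eq. (2.13): noise adds to the auto-spectra, while TE carries no noise term
S_TT = b_ell**2 * C_TT + N_TT
S_EE = b_ell**2 * C_EE + N_EE
S_TE = b_ell**2 * C_TE
```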
##########################
#### Total Noise maps ####
##########################
cls_total = np.zeros((nbmc, 6, lmax+1),float)
dls_total = np.zeros((nbmc, 6, lmax+1),float)
wT_noise = np.empty((nbmc,lmax-1),float)
wE_noise = np.empty((nbmc,lmax-1),float)
# total_maps = fits.PrimaryHDU()
# total_maps.writeto('files/total_maps.fits',overwrite=True)
# total_map_E = fits.PrimaryHDU()
# total_map_E.writeto('files/total_map_E.fits',overwrite=True)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc)  # pick one random realization (the full MC loop above is commented out)
# hdul_CMB = fits.open('files/maps_TQU.fits', mode='readonly', memmap=True)
# maps_TQU = hdul_CMB[i+1].data
# hdul_CMB.close()
# hdul_noise = fits.open('files/noise_maps.fits', mode='readonly', memmap=True)
# noise_maps = hdul_noise[i+1].data
# hdul_noise.close()
# Total maps are constructed by summing the CMB and noise maps
total_maps = maps_TQU + noise_maps #TQU ##pixwin+fwhm##
# fits.append('files/total_maps.fits', total_maps)
alm_total = hp.sphtfunc.map2alm(total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #alms -> TEB
total_map_E = hp.sphtfunc.alm2map(alm_total[1], nside, lmax=None, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True)
# fits.append('files/total_map_E.fits', total_map_E)
cls_total[i] = hp.sphtfunc.anafast(total_maps, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
dls_total[i] = (l*(l+1)) * cls_total[i] / (2 * np.pi)
##### Wiener filter (angular power spectra of CMB and noise)####
wT_noise[i] = (cls_total[i,3,2:]/(cls_total[i,0,2:]))
wE_noise[i] = (cls_total[i,3,2:]/(cls_total[i,1,2:]))
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
100.00 % completed ******************************************************
## Save:
# np.save(os.path.join(path,"cls_total.npy"), cls_total)
# np.save(os.path.join(path,"dls_total.npy"), dls_total)
# np.save(os.path.join(path,"wT_noise.npy"), wT_noise) #CMB+Noise
# np.save(os.path.join(path,"wE_noise.npy"), wE_noise)
Cls_total = np.load(os.path.join(path,"Data/cls_total.npy")).reshape(nbmc,6,lmax+1) #pixwin+fwmh+noise
Dls_total = np.load(os.path.join(path,"Data/dls_total.npy")).reshape(nbmc,6,lmax+1)
WT_noise = np.load(os.path.join(path,"Data/wT_noise.npy")).reshape(nbmc,lmax-1) #CMB+Noise simulated
WE_noise = np.load(os.path.join(path,"Data/wE_noise.npy")).reshape(nbmc,lmax-1)
We can notice that, under the assumption of no correlation between the CMB and noise signals, the spectrum of the total map decomposes as the sum of the CMB and noise spectra, as expected. This is shown graphically in the following plot, where we represent the CMB, the noise and the summed power spectra together with the total-map power spectrum.
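The decomposition rests on the cross-term between independent signal and noise averaging to zero. A standalone toy check with numpy (no healpy needed; the mode vectors below stand in for the $2\ell+1$ harmonic coefficients at one multipole):

```python
import numpy as np

rng = np.random.default_rng(42)
n_modes = 2001  # stands in for the 2l+1 m-modes at one (high) multipole

s = rng.normal(0.0, 1.0, n_modes)  # "CMB" modes
n = rng.normal(0.0, 0.5, n_modes)  # "noise" modes, independent of s

cl_s = np.mean(s**2)
cl_n = np.mean(n**2)
cl_tot = np.mean((s + n)**2)   # = cl_s + cl_n + 2<s n>

# For independent modes the cross-term 2<s n> is small, so cl_tot ~ cl_s + cl_n
cross = cl_tot - cl_s - cl_n
```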
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (20,6))
# plt.rcParams.update({'font.size': 2})
plt.rc('font', size=20) # controls default text sizes
plt.rc('axes', titlesize=23) # fontsize of the axes title
plt.rc('axes', labelsize=23) # fontsize of the x and y labels
plt.rc('xtick', labelsize=23) # fontsize of the tick labels
plt.rc('ytick', labelsize=22) # fontsize of the tick labels
plt.rc('legend', fontsize=20) # legend fontsize
plt.rc('figure', titlesize=20) # fontsize of the figure title
# ax[0].set_title('Temperature map (CMB + Noise)')
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside],color='green', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(Dls_noise, axis=0)[0,2:2*nside],color='plum', label=r'$\langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(Dls_total, axis=0)[0,2:2*nside],color='navy', label=r'$\langle T_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside]+np.mean(Dls_noise, axis=0)[0,2:2*nside],linestyle='dashed',color='cyan', label=r'$\langle C_{\ell} \rangle_{100} + \langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[0].set_yscale('log')
# ax[0].set_xscale('log')
ax[0].set_ylabel(r'$D_\ell^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend()
# ax[1].set_title('E-mode polarization map (CMB + Noise)')
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside],color='green', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(Dls_noise, axis=0)[1,2:2*nside],color='plum', label=r'$\langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(Dls_total, axis=0)[1,2:2*nside],color='navy', label=r'$\langle T_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside]+np.mean(Dls_noise, axis=0)[1,2:2*nside],linestyle='dashed',color='cyan', label=r'$\langle C_{\ell} \rangle_{100} + \langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[1].set_yscale('log')
# ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_\ell^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
plt.show()
Moreover, we expect the $T$-$E$ cross-spectrum to have no noise contribution, in contrast to the temperature and $E$-mode polarization spectra, where the effect of including the noise maps is noticeable at high multipoles ($\ell \sim 10^3$).
%matplotlib inline
fig, ax = plt.subplots(1,3, figsize = (30,8))
# ax[0].set_title('Temperature map')
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside],color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(ells_CAMB[2:2*nside], Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2 * px[0][2:2*nside]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[0].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[0,2:2*nside]-np.std(Dls_cmb, axis=0)[0,2:2*nside],np.mean(Dls_cmb, axis=0)[0,2:2*nside]+np.std(Dls_cmb, axis=0)[0,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$D_{\ell}^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend(loc='lower left')
# ax[1].set_title('E-mode polarization map')
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside],color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(ells_CAMB[2:2*nside],Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[1].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[1,2:2*nside]-np.std(Dls_cmb, axis=0)[1,2:2*nside],np.mean(Dls_cmb, axis=0)[1,2:2*nside]+np.std(Dls_cmb, axis=0)[1,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_{\ell}^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend(loc='lower left')
# ax[2].set_title('TE CMB map')
ax[2].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[3,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[2].plot(ells_CAMB[2:2*nside], Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*px[0][2:2*nside]*fw_TEB[2:2*nside,1]*px[1][2:2*nside]), color='k', linestyle='dashed', alpha=0.7, label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[2].fill_between(l[2:2*nside],np.mean(Dls_cmb, axis=0)[3,2:2*nside]-np.std(Dls_cmb, axis=0)[3,2:2*nside],np.mean(Dls_cmb, axis=0)[3,2:2*nside]+np.std(Dls_cmb, axis=0)[3,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[2].set_xscale('log')
ax[2].set_ylabel(r'$D_{\ell}^{TE} \quad [\mu K^2]$')
ax[2].set_xlabel(r'$\ell$')
ax[2].legend()
plt.show()
Finally, we represent the simulated filters including the noise spectra together with the theoretical prediction.
## Plot of Noise filters:
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (15,5))
ax[0].plot(l[2:2*nside], (Dls_CAMB[3,2:2*nside]*px[1][2:2*nside]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*fw_TEB[2:2*nside,0])/(Dls_CAMB[0,2:2*nside]*px[0][2:2*nside]**2*fw_TEB[2:2*nside,0]**2+Dls_noise_T[2:2*nside]), color='navy', label=r'$w_{\ell}^{TH}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(WT_noise, axis=0)[:2*nside-2],color='orangered', linestyle='dashed', label=r'$\langle w_\ell \rangle_{100}$',alpha=0.7,linewidth=3)
# ax[0].set_title('Filter for almT with with CMB and Noise spectrum')
# ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel('$w_T$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend()
ax[1].plot(l[2:2*nside], (Dls_CAMB[3,2:2*nside]*px[1][2:2*nside]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*fw_TEB[2:2*nside,0])/(Dls_CAMB[1,2:2*nside]*px[1][2:2*nside]**2*fw_TEB[2:2*nside,1]**2+Dls_noise_P[2:2*nside]), color='navy', label=r'$w_\ell^{TH}$', linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(WE_noise, axis=0)[:2*nside-2],color='orangered', linestyle='dashed', label=r'$\langle w_\ell \rangle_{100}$', linewidth=3)
# ax[1].set_title('Filter for almE with CMB and Noise spectrum')
# ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel('$w_E$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
plt.show()
We can analyse the relevance of the contaminants in the Wiener filter. For this purpose we study the change with respect to the ideal filter when the contaminants are added. We start by analysing the noise contribution, parametrized by $n \in \left[0,1\right]$; the difference between the ideal filter and the one including noise is then quantified by:
$\begin{equation} N(n) = 1 - \sum_{\ell=2}^{\ell_{max}}\frac{w_{\ell}(n)}{w_{\ell}(n=0)} \frac{1}{\ell_{max}-1} \quad \text{with} \quad w_{\ell}(n) = \frac{C^{XY}_{\ell}}{C^{XX}_{\ell} + n N^{XX}_{\ell}}\ \tag{2.14} . \end{equation}$
Cls_noise_T = np.load("files/Theoretical/Cls_noise_T.npy")
Cls_noise_P = np.load("files/Theoretical/Cls_noise_P.npy")
n = np.linspace(0,1,100)
NT = np.empty((len(n)))
NE = np.empty((len(n)))
for i in np.arange(100):
    noise_T = (Cls_CAMB[3,2:lmax+1]/(Cls_CAMB[0,2:lmax+1]+n[i]*Cls_noise_T[2:]))
    ideal_T = (Cls_CAMB[3,2:lmax+1]/Cls_CAMB[0,2:lmax+1])
    noise_E = (Cls_CAMB[3,2:lmax+1]/(Cls_CAMB[1,2:lmax+1]+n[i]*Cls_noise_P[2:]))
    ideal_E = (Cls_CAMB[3,2:lmax+1]/Cls_CAMB[1,2:lmax+1])
    NT[i] = 1 - sum(noise_T/ideal_T) * (1/(lmax-1))
    NE[i] = 1 - sum(noise_E/ideal_E) * (1/(lmax-1))
The plot below illustrates the low noise level expected for future LiteBIRD observations. In both cases the difference between the ideal filter and the filter including noise is almost negligible over the whole $n$ range. We can also notice that the impact of the LiteBIRD noise is even smaller for the temperature maps than for the $E$-mode polarization maps. This can be explained by the fact that the noise dispersion, relative to the CMB signal, is smaller in temperature, and moreover the angular power spectrum $C_{\ell}^{TT}$ is orders of magnitude higher than $C_{\ell}^{EE}$. We can conclude that, even if we could not suppress the noise contribution, the difference with respect to the ideal filter would be smaller than $0.5\%$ for the $w^E_{\ell}$ filter and even smaller for $w^T_{\ell}$.
## Plot of Noise filters:
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (20,6))
ax[0].plot(n, NT, linewidth=3)
ax[0].set_ylabel(r'$N_T(n)$')
ax[0].set_xlabel('n')
ax[1].plot(n, NE, linewidth=3)
ax[1].set_ylabel(r'$N_E(n)$')
ax[1].set_xlabel('n')
plt.show()
There are cases in which the contaminants of the sky signal have complex emissions, and removing them to recover the CMB signal is a challenging task. Generally, the need for masking a region of the sky arises when the astrophysical foregrounds are included. By convention, masks are defined with zeros where the data are masked and ones where they are left unmasked. It should be noted that we introduce the masked maps before addressing the foreground problem because we will need to mask the sky to obtain the residual foreground maps.
Although introducing the mask helps us solve the problem introduced by the foregrounds, it also raises new ones. The most noticeable is that the mask introduces correlations between the $Q$ and $U$ maps, and therefore in the $E$ and $B$ maps. Not only that, but it also may induce a coupling between the different multipoles. We need to consider all these effects when computing the power spectra of masked maps.
This reduces the number of pixels available for the analysis and consequently affects the cosmic variance, which can now be expressed as:
$\begin{equation} \sigma(C^{XX}_{\ell}) = \sqrt{\frac{2}{(2\ell + 1) \cdot f_{sky}}} C^{XX}_{\ell} \tag{2.15} \end{equation}$ with $f_{sky}$ the fraction of the sky left uncovered by the mask. We consider two possibilities for obtaining the spectra of a masked map:
(1) Compute the power spectrum as usual with the $\verb+healpy.anafast+$ routine. This neglects the effect of the mask on the angular power spectrum. At first order, the difference is the fraction of masked sky ($f_{sky}$), although further differences, associated with the mode coupling, are also present [Zonca, 2019].
(2) Compute the power spectrum with $\verb+NaMaster+$ (a pseudo-$C_{\ell}$ estimator), which includes the corrections by computing the coupling matrix associated with the mask. For this reason we will use this second option as our power spectrum estimator for masked maps [Alonso, 2019].
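Eq. (2.15) can be wrapped in a small helper to quantify the penalty of masking; this is a minimal sketch, with `cl`, `ell` and `fsky` as generic inputs rather than notebook variables:

```python
import numpy as np

def cosmic_variance(cl, ell, fsky=1.0):
    """Cosmic-variance error bar of an auto-spectrum C_l for a sky fraction fsky (Eq. 2.15)."""
    ell = np.asarray(ell, dtype=float)
    return np.sqrt(2.0 / ((2.0 * ell + 1.0) * fsky)) * np.asarray(cl, dtype=float)

# Masking 40% of the sky (fsky = 0.6) inflates every error bar by 1/sqrt(0.6) ~ 1.29
ell = np.arange(2, 10)
cl = np.ones_like(ell, dtype=float)
full_sky = cosmic_variance(cl, ell, fsky=1.0)
masked = cosmic_variance(cl, ell, fsky=0.6)
```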
##########################################################
### Functions for computing the spectrum with NaMaster ###
##########################################################
def master(mapT, mask, b):
    '''
    Computes the power spectrum given a spin-0 map (temperature), a mask and the binning scheme.
    The spectrum is returned as the first [0] output, together with the NmtField and the NmtWorkspace.
    '''
    f0 = nmt.NmtField(mask, mapT)
    wsp = nmt.NmtWorkspace()
    wsp.compute_coupling_matrix(f0, f0, b)
    cl_coupled = nmt.compute_coupled_cell(f0, f0)
    return wsp.decouple_cell(cl_coupled), f0, wsp
def nmt_spectra(mapsTQU, mask, b):
    '''
    Computes all the power spectra combinations given the TQU maps, the mask and the binning scheme.
    Slow; only recommended for small computations. See master and master_cross_spectra, which
    compute only the requested spectrum instead of all the combinations.
    '''
    ell_eff = b.get_effective_ells()
    cls = np.empty((4, int(b.get_n_bands())), float)
    f0 = nmt.NmtField(mask, [mapsTQU[0]])
    f2 = nmt.NmtField(mask, [mapsTQU[1], mapsTQU[2]], purify_e=True)
    cls[0] = nmt.compute_full_master(f0, f0, b)     #TT # spin-0 x spin-0
    cls_22 = nmt.compute_full_master(f2, f2, b)     # spin-2 x spin-2 (EE, EB, BE, BB)
    cls[1] = cls_22[0]                              #EE
    cls[2] = cls_22[3]                              #BB
    cls[3] = nmt.compute_full_master(f0, f2, b)[0]  #TE # spin-0 x spin-2
    return cls
def master_cross_spectra(map1, map2, mask, b):
    '''
    Computes the cross-power spectrum with NaMaster given two spin-0 maps and a mask.
    If we are only interested in the spectrum we just need to take the [0] output.
    The two NmtField objects and the NmtWorkspace are also returned.
    '''
    ell_eff = b.get_effective_ells()
    f0_1 = nmt.NmtField(mask, [map1])
    f0_2 = nmt.NmtField(mask, [map2])
    cls = nmt.compute_full_master(f0_1, f0_2, b)  # spin-0 x spin-0
    wsp = nmt.NmtWorkspace()
    wsp.compute_coupling_matrix(f0_1, f0_2, b)
    return cls, f0_1, f0_2, wsp
To obtain even more precise results, it is sometimes necessary to bin the multipoles, mostly the lowest ones. A weighted average over the multipoles in each bandpower is then performed, reducing the effect of the cosmic variance at the lowest multipoles. We will use a binning scheme of 3 multipoles up to $\ell = 8$, of 4 multipoles up to $\ell=20$, of 5 up to $\ell=90$, and the subsequent ones one by one (recall that the monopole, $\ell=0$, and the dipole, $\ell=1$, are not considered in the analysis).
##################################################
### Functions for computing covariance matrices ###
##################################################
def split_custom(*args):
    '''
    Customized binning scheme (used for the filters obtained from masked Cls).
    '''
    ells = np.arange(3*nside, dtype='int32')  # array of multipoles
    return 2*[3] + 3*[4] + 14*[5] + (ells[-1]-1-88)*[1]
def get_binning(nside):
    '''
    Binning scheme with custom-made bandpowers (see split_custom).
    '''
    ells = np.arange(3*nside, dtype='int32')  # array of multipoles
    split = split_custom()
    weights = 0.2 * np.ones_like(ells)        # array of weights
    weights[0:2] = 0                          # l = 0, 1 are excluded
    weights[2:] = np.hstack([[(q+0.5)/np.sum(qs+0.5) for q in qs] for qs in np.split(ells[2:], np.cumsum(split)[:-1])])
    bpws = [i*np.ones(element) for i, element in enumerate(split)]
    bpws.insert(0, -1*np.ones(2))
    bpws = np.hstack(bpws)
    return ells, bpws, weights
def interpolation(x, y, xnew):
    '''
    Quadratic interpolation (used for recovering the initial shape after Cls binning).
    '''
    f = interpolate.interp1d(x, y, kind='quadratic', fill_value="extrapolate")
    try:
        return f(xnew)
    except ValueError:
        return [f(el) for el in xnew]
def get_covariance_matrix(maps, mask, nside, b):
    '''
    Computes the covariance matrix given a map, a mask, the nside and the binning scheme.
    For this purpose the spectrum is obtained first and also returned as an output.
    When a binning scheme is introduced we need to interpolate the Cls into a coherent shape for the covariance matrix.
    For better results it is recommended to include the apodized mask.
    CAREFUL: NaMaster returns the Cls from multipole l=2, so we need to include l=0,1 as 0 to get the right length.
    '''
    ell_eff = b.get_effective_ells()
    cls_out, f0, w00 = master(maps, mask, b)
    # Covariance evaluation
    cw = nmt.NmtCovarianceWorkspace()
    # This is the time-consuming operation. Note that you only need to do it once, regardless of spin.
    cw.compute_coupling_coefficients(f0, f0, f0, f0)
    n_ell = len(cls_out[0])
    cls_tt = np.zeros(3*nside)
    cls_tt[2:] = cls_out[0]
    cls_tt_interp = interpolation(np.insert(ell_eff, [0,0], [0,1]), cls_tt, np.arange(3*nside))
    covariance_matrices = nmt.gaussian_covariance(cw,
                                                  0, 0, 0, 0,        # spins of the 4 fields
                                                  [cls_tt_interp],   # TT
                                                  [cls_tt_interp],   # TT
                                                  [cls_tt_interp],   # TT
                                                  [cls_tt_interp],   # TT
                                                  w00, wb=w00).reshape([n_ell, 1, n_ell, 1])
    return cls_out[0], covariance_matrices[:, 0, :, 0]
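As a quick sanity check of the custom bandpowers above, we can verify with plain numpy that the widths returned by `split_custom` cover every multipole from $\ell=2$ up to $3N_{side}-1$ exactly once (here with $N_{side}=512$, as used in the notebook):

```python
import numpy as np

nside = 512
ells = np.arange(3 * nside)

# Same bandpower widths as split_custom(): 2 bins of 3, 3 bins of 4, 14 bins of 5,
# then one multipole per bin for the remaining range
split = 2*[3] + 3*[4] + 14*[5] + (ells[-1] - 1 - 88)*[1]

# The widths must add up to the number of multipoles from l=2 to l=3*nside-1
covered = sum(split)

# Lower edges of the bandpowers: l = 2-4, 5-7, 8-11, 12-15, 16-19, 20-24, ...
edges = 2 + np.cumsum([0] + split)
```

The edges confirm the description in the text: the bins of 3 end just before $\ell=8$, the bins of 4 just before $\ell=20$, and the bins of 5 just before $\ell=90$.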
Moreover, we apodize the mask to introduce a gradual transition between the zeros and ones by convolving the mask with a Gaussian function. For this purpose we compute the spectra for several apodization scales (in degrees) and evaluate the corresponding $\chi^2$ values.
hdul_cmb = fits.open('files/Data/maps_TQU.fits', mode='readonly', memmap=True)
mapsTQU = hdul_cmb[1].data
hdul_cmb.close()
apod_deg = np.linspace(1, 7, num=7)
cls_namaster_chi2 = np.empty((len(apod_deg), 4, lmax-1),float)
# for i in np.arange(len(apod_deg)):
i = np.random.randint(low=0, high=len(apod_deg))  # pick one apodization scale (the full loop is commented out)
mask = nmt.mask_apodization(hp.read_map(os.path.join(path,"gal_planck_mask_fsky60_nside512.fits"), verbose=False),
apod_deg[i], apotype="Smooth")
b = nmt.NmtBin.from_nside_linear(nside, 1)
cls_namaster_chi2[i] = nmt_spectra(mapsTQU,mask,b)
# np.save(os.path.join(path,"cls_namaster_chi2.npy"), cls_namaster_chi2)
cls_namaster_chi2 = np.load(os.path.join(path,"cls_namaster_chi2.npy"))
chi2_values = np.empty((6))
for i in np.arange(6):
    chi2_values[i] = chi2(np.mean(Cls_cmb[:,0,2:lmax-1],axis=0), cls_namaster_chi2[int(i),0,2:], 0.6, lmax-1)[0]
    plt.scatter(i+1, chi2_values[i], color='C0')
# print(min(chi2_values))
print('The minimum value for chi2 is found to be at ' + str(np.where(chi2_values == np.amin(chi2_values))[0]+1) + ' (degrees)' )
The minimum value for chi2 is found to be at [5] (degrees)
mask = hp.read_map("files/gal_planck_mask_fsky60_nside512.fits").astype(np.bool_) # UNSEEN; mask with healpy
mask60 = hp.read_map("files/gal_planck_mask_fsky60_nside512.fits") #Raw mask
mask60_apod = nmt.mask_apodization(mask60, 5., apotype="Smooth") #Apodized mask --> Better NaMaster spectrum
%matplotlib inline
fig, (ax1,ax2) = plt.subplots(ncols=2,figsize=(30,8))
plt.axes(ax1)
hp.mollview(mask60,title=r'Galactic plane mask with $f_{sky}=60\%$',cbar=False,bgcolor='white',cmap='Spectral_r',norm='None', hold=True)
plt.axes(ax2)
hp.mollview(mask60_apod,title=r'Apodized galactic plane mask with 5 degrees', cbar=False,cmap='Spectral_r', hold=True)
plt.suptitle(r'Raw and apodized masks with $f_{sky}=60\%$')
plt.show()
We can also show that we recover the all-sky spectrum when computing the spectrum of masked maps with $\verb+NaMaster+$ [Alonso, 2019]. We mask the total (CMB+noise) maps previously obtained and compute the power spectrum with the defined $\verb+master+$ and $\verb+master_cross_spectra+$ functions.
############################################################
#### Masked TOTAL maps (CMB+Noise) --> NaMaster spectra ####
############################################################
b = nmt.NmtBin.from_nside_linear(nside, 1)
lm = np.arange(2,int(b.get_n_bands())+2)
cls_total_mask60_namaster = np.empty((nbmc,4, int(b.get_n_bands())),float)
dls_total_mask60_namaster = np.empty((nbmc,4, int(b.get_n_bands())),float)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc)  # pick one random realization (the full MC loop is commented out)
# hdul_total = fits.open('files/total_maps.fits', mode='readonly', memmap=True)
# total_maps = hdul_total[i+1].data
# hdul_total.close()
# hdul_total_E = fits.open('files/total_map_E.fits', mode='readonly', memmap=True)
# total_map_E = hdul_total_E[i+1].data
# hdul_total_E.close()
# Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
#### Power spectrum with namaster ####
cls_total_mask60_namaster[i,0] = master([mask60_total_maps[0]], mask60_apod, b)[0]
cls_total_mask60_namaster[i,1] = master([mask60_total_map_E], mask60_apod, b)[0]
cls_total_mask60_namaster[i,3] = master_cross_spectra(mask60_total_maps[0],mask60_total_map_E, mask60_apod, b)[0]
dls_total_mask60_namaster[i,0] = (lm*(lm+1)) * cls_total_mask60_namaster[i,0] / (2 * np.pi)
dls_total_mask60_namaster[i,1] = (lm*(lm+1)) * cls_total_mask60_namaster[i,1] / (2 * np.pi)
dls_total_mask60_namaster[i,3] = (lm*(lm+1)) * cls_total_mask60_namaster[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"cls_total_mask60_namaster.npy"), cls_total_mask60_namaster)
# np.save(os.path.join(path,"dls_total_mask60_namaster.npy"), dls_total_mask60_namaster)
Dls_total_mask60_namaster = np.load(os.path.join(path,"Data/dls_total_mask60_namaster.npy"))
Cls_total_mask60_namaster = np.load(os.path.join(path,"Data/cls_total_mask60_namaster.npy"))
#### THEORETICAL WIENER FILTER ALL SKY OVER MASKED CMB+NOISE MAPS ####
cls_total_mask60_anafast = np.empty((nbmc,6, lmax+1),float)
dls_total_mask60_anafast = np.empty((nbmc,6, lmax+1),float)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc)  # pick one random realization (the full MC loop is commented out)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
#### Power spectrum with anafast ####
cls_total_mask60_anafast[i] = hp.sphtfunc.anafast(mask60_total_maps, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
dls_total_mask60_anafast[i] = (l*(l+1)) * cls_total_mask60_anafast[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Data/cls_total_mask60_anafast.npy"), cls_total_mask60_anafast)
# np.save(os.path.join(path,"Data/dls_total_mask60_anafast.npy"), dls_total_mask60_anafast)
Dls_total_mask60_anafast = np.load("files/Data/dls_total_mask60_anafast.npy")
Cls_total_mask60_anafast = np.load("files/Data/cls_total_mask60_anafast.npy")
We now represent the power spectra obtained from $\textbf{full-sky}$ maps and compare them with those obtained from $\textbf{masked}$ maps, computing the needed correction with $\verb+NaMaster+$. Moreover, we compare the corrected spectra with those obtained with $\verb+healpy.anafast+$ to see the effects of masking.
%matplotlib inline
fig, ax = plt.subplots(1,3, figsize = (30,8))
# ax[0].set_title('Theoretical and simulated TE map')
ax[0].plot(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,0,:2*nside-2],axis=0),color='orange', label=r'$\langle S_{\ell}^{Nmt} \rangle_{100}$',linewidth=4)
ax[0].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,0],axis=0)[:2*nside-2]-np.std(Dls_total_mask60_namaster[:,0],axis=0)[:2*nside-2],np.mean(Dls_total_mask60_namaster[:,0],axis=0)[:2*nside-2]+np.std(Dls_total_mask60_namaster[:,0],axis=0)[:2*nside-2], color='orange', alpha=0.4)
ax[0].plot(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,0,2:2*nside],axis=0),color='cyan', label=r'$\langle S_{\ell}^{Anaf} \rangle_{100}$',linewidth=4)
ax[0].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,0,2:2*nside],axis=0)-np.std(Dls_total_mask60_anafast[:,0,2:2*nside],axis=0),np.mean(Dls_total_mask60_anafast[:,0,2:2*nside],axis=0)+np.std(Dls_total_mask60_anafast[:,0,2:2*nside],axis=0), color='cyan', alpha=0.4)
ax[0].plot(l[2:2*nside], np.mean(Dls_total[:,0,2:2*nside],axis=0), color='k', linestyle='dashed', label=r'$\langle S_{\ell}^{Full} \rangle_{100}$',alpha=0.7,linewidth=3)
ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$D_{\ell}^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend()
# ax[1].set_title('Theoretical and simulated TE map')
ax[1].plot(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,1,:2*nside-2],axis=0),color='orange', label=r'$\langle S_{\ell}^{Nmt} \rangle_{100}$',linewidth=4)
ax[1].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,1],axis=0)[:2*nside-2]-np.std(Dls_total_mask60_namaster[:,1],axis=0)[:2*nside-2],np.mean(Dls_total_mask60_namaster[:,1],axis=0)[:2*nside-2]+np.std(Dls_total_mask60_namaster[:,1],axis=0)[:2*nside-2], color='orange', alpha=0.4)
ax[1].plot(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,1,2:2*nside],axis=0),color='cyan', label=r'$\langle S_{\ell}^{Anaf} \rangle_{100}$',linewidth=4)
ax[1].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,1,2:2*nside],axis=0)-np.std(Dls_total_mask60_anafast[:,1,2:2*nside],axis=0),np.mean(Dls_total_mask60_anafast[:,1,2:2*nside],axis=0)+np.std(Dls_total_mask60_anafast[:,1,2:2*nside],axis=0), color='cyan', alpha=0.4)
ax[1].plot(l[2:2*nside], np.mean(Dls_total[:,1,2:2*nside],axis=0), color='k', linestyle='dashed', label=r'$\langle S_{\ell}^{Full} \rangle_{100}$',alpha=0.7,linewidth=3)
ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_{\ell}^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
# ax[2].set_title('Theoretical and simulated TE map')
ax[2].plot(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,3,:2*nside-2],axis=0),color='orange', label=r'$\langle S_{\ell}^{Nmt} \rangle_{100}$',linewidth=4)
ax[2].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_namaster[:,3],axis=0)[:2*nside-2]-np.std(Dls_total_mask60_namaster[:,3],axis=0)[:2*nside-2],np.mean(Dls_total_mask60_namaster[:,3],axis=0)[:2*nside-2]+np.std(Dls_total_mask60_namaster[:,3],axis=0)[:2*nside-2], color='orange', alpha=0.4)
ax[2].plot(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,3,2:2*nside],axis=0),color='cyan', label=r'$\langle S_{\ell}^{Anaf} \rangle_{100}$',linewidth=4)
ax[2].fill_between(l[2:2*nside], np.mean(Dls_total_mask60_anafast[:,3,2:2*nside],axis=0)-np.std(Dls_total_mask60_anafast[:,3,2:2*nside],axis=0),np.mean(Dls_total_mask60_anafast[:,3,2:2*nside],axis=0)+np.std(Dls_total_mask60_anafast[:,3,2:2*nside],axis=0), color='cyan', alpha=0.4)
ax[2].plot(l[2:2*nside], np.mean(Dls_total[:,3,2:2*nside],axis=0), color='k', linestyle='dashed', label=r'$\langle S_{\ell}^{Full} \rangle_{100}$',alpha=0.7,linewidth=3)
# ax[2].set_yscale('log')
ax[2].set_xscale('log')
ax[2].set_ylabel(r'$D_{\ell}^{TE} \quad [\mu K^2]$')
ax[2].set_xlabel(r'$\ell$')
ax[2].legend()
plt.show()
The main astrophysical foregrounds come from our Galaxy, arising from four mechanisms: synchrotron radiation, radiation from electron-ion scattering (free-free emission), thermal dust emission and the Anomalous Microwave Emission (AME), which has been theorized to be produced by spinning dust [Ichiki, 2014].

We generate foreground-only frequency maps using the $\verb+PySM+$ templates [Thorne, 2017] for thermal dust, synchrotron, AME and free-free emission $\verb+("d1","s1","a1","f1")+$. Notice that the free-free and AME emissions hardly contribute to polarization, although they may be relevant for the temperature maps. Depending on the frequency we are working at, a different mechanism may dominate. In our case we generate the foreground maps according to the LiteBIRD specifications [Hazumi, 2020]. To compute these maps we need the frequency and sensitivity columns summarized in Table 2, leading to 22 foreground maps ($f_{\nu}$).
| Telescope | Band ID | Center Frequency [GHz] | $\mathbf{\sigma_{P,ch}}$ [$\mathbf{\mu}$K $\cdot$ arcmin] |
|---|---|---|---|
| LFT | 1 | 40 | 37.42 |
| LFT | 2 | 50 | 33.46 |
| LFT | 3 | 60 | 21.31 |
| LFT | 4 | 68 | 19.91 / 31.77 |
| LFT | 5 | 78 | 15.55 / 19.13 |
| LFT | 6 | 89 | 12.28 / 28.77 |
| LFT / MFT | 7 | 100 | 10.34 / 8.48 |
| LFT / MFT | 8 | 119 | 7.69 / 5.70 |
| LFT / MFT | 9 | 140 | 7.25 / 6.38 |
| MFT | 10 | 166 | 5.57 |
| MFT / HFT | 11 | 195 | 7.05 / 10.50 |
| HFT | 12 | 235 | 10.79 |
| HFT | 13 | 280 | 13.80 |
| HFT | 14 | 337 | 21.95 |
| HFT | 15 | 402 | 47.45 |

Bands observed by two channels list both sensitivities, separated by a slash.
########################
### Foregrounds maps ###
########################
sky = pysm3.Sky(nside=nside, preset_strings=["d1", "s1", "a1", "f1"], output_unit='uK_CMB')
freqs = np.array([40,50,60,68,68,78,78,89,89,100,100,119,119,140,140,166,195,195,235,280,337,402])*u.GHz
s_P = np.array((37.42,33.46,21.31,19.91,31.77,15.55,19.13,12.28,28.77,10.34,8.48,
7.69,5.70,7.25,6.38,5.57,7.05,10.50,10.79,13.80,21.95,47.45)) #muK·arcmin
sigma_pix = s_P / (Anside*(180/np.pi)*60)
a = 1/(sum(1/(sigma_pix**2)))
w = a/(sigma_pix**2) #weights
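These are inverse-variance weights, normalized to sum to one; since the per-pixel dispersion is proportional to the channel sensitivity, the same weights follow directly from the Table 2 values. A minimal standalone check:

```python
import numpy as np

# Polarization sensitivities per channel [muK·arcmin] (Table 2)
s_P = np.array([37.42, 33.46, 21.31, 19.91, 31.77, 15.55, 19.13, 12.28, 28.77,
                10.34, 8.48, 7.69, 5.70, 7.25, 6.38, 5.57, 7.05, 10.50,
                10.79, 13.80, 21.95, 47.45])

a = 1.0 / np.sum(1.0 / s_P**2)  # normalization constant
w = a / s_P**2                  # inverse-variance weights

# By construction the weights sum to one, and the most sensitive channel
# (the smallest sigma, the 119 GHz MFT band) gets the largest weight.
```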
Once we have generated the foreground maps, it is necessary to perform a component separation analysis to obtain the residual contribution that will be added to the CMB temperature and $E$-mode polarization maps. As the component separation analysis is outside the scope of our work, we will apply some known results and approximations. For that purpose we determine the residual foreground contribution as a linear combination of the 22 frequency maps obtained previously, considering normalized weights:
$\begin{equation} f_R(x) = \sum_{\nu} f_{\nu}(x) w_{\nu} \quad \text{with} \quad w_{\nu} = \frac{a}{\sigma_{P,\nu}^2} \ , \ \sum_{\nu} w_{\nu} = 1. \tag{2.16} \end{equation}$
## Linear combination of the 22 foreground frequency maps ---> residual foreground map ##
foreg = np.zeros((3, npix),float)
for j in np.arange(len(freqs)):
    f_j_maps = sky.get_emission(freqs[j])
    foreg = foreg + f_j_maps * w[j]
#Foreground maps all sky
foreg_maps_TQU = hp.sphtfunc.smoothing(foreg, fwhm=fwhm, pol=True, iter=3, lmax=lmax) #beamed with fwhm=30arcmin
alm_foreg = hp.sphtfunc.map2alm(foreg_maps_TQU, lmax=lmax, mmax=None, pol=True, verbose=True) #maps are TQU. TEB alm’s.
foreg_map_E = hp.sphtfunc.alm2map(alm_foreg[1], nside, lmax=None, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True)
#Foreground maps masked with fsky=80%
mask80 = hp.read_map("files/gal_planck_mask_fsky80_nside512.fits").astype(np.bool_)
fsky_80 = np.mean(mask80)
mask80_foreg_maps_TQU = hp.ma(foreg_maps_TQU)
mask80_foreg_maps_TQU.mask = np.logical_not(mask80) #UNSEEN
mask80_foreg_map_E = hp.ma(foreg_map_E)
mask80_foreg_map_E.mask = np.logical_not(mask80) #UNSEEN
### Spectra (all-sky and masked maps)
cls_foreg = hp.sphtfunc.anafast(foreg_maps_TQU, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
dls_foreg = (l*(l+1)) * cls_foreg / (2 * np.pi)
cls_foreg_mask80 = hp.sphtfunc.anafast(mask80_foreg_maps_TQU, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)/fsky_80
dls_foreg_mask80 = (l*(l+1)) * cls_foreg_mask80 / (2 * np.pi)
It is expected that, for LiteBIRD, after applying cleaning techniques, the foreground contribution to the CMB temperature and $E$-mode polarization maps will be negligible. According to [Errard, 2016] and [Diego-Palazuelos, 2020], the predicted residual level for LiteBIRD is:
$\begin{equation} M^B_{\ell} = 1.5 \cdot 10^{-4} \ell^{-2.29}. \tag{2.17} \end{equation}$
We can assume that this level is similar for both $E$- and $B$-modes. With this, we rescale the foreground residual, maintaining the observed structure, with:
$\begin{equation} a^{XY}_{\ell m} = \hat{a}^{XY}_{\ell m} \sqrt{\eta_B} \Rightarrow \boxed{F^{XY}_{\ell} = \hat{F}^{XY}_{\ell} \eta_B} \quad . \tag{2.18} \end{equation}$
In order to account for the possible correlations between E and B, the $\eta_B$ parameter is defined as:
$\begin{equation} \eta_B = \left \langle \frac{M_{\ell}^B}{\tilde{F}^B_{\ell}} \right \rangle, \tag{2.19} \end{equation}$
where $\tilde{F}^B_{\ell} \equiv \tilde{F}^{BB}_{\ell}$ is obtained as the spectrum of the masked maps, $\tilde{F}^{XY}_{\ell} = \langle \tilde{a}^X_{\ell m} \tilde{a}^{Y*}_{\ell m}\rangle$, computed with $\verb+healpy.anafast+$. As we have seen, the $\verb+healpy.anafast+$ estimation does not recover the full-sky spectrum, so we correct it by dividing by the $f_{sky}$ factor.
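As an illustrative, self-contained sketch of the rescaling in Eqs. (2.17)–(2.19), using a toy measured spectrum (the amplitude and slope of $\tilde{F}^B_\ell$ here are hypothetical, numpy only):

```python
import numpy as np

l = np.arange(2, 200)

# Model residual level for LiteBIRD (Eq. 2.17)
MB_l = 1.5e-4 * l**(-2.29)

# Hypothetical "measured" masked B-mode residual spectrum (already fsky-corrected)
F_B = 3.0e-3 * l**(-2.1)

# Eta_B (Eq. 2.19): mean ratio between the model and the measurement
eta_B = np.mean(MB_l / F_B)

# Rescaling the alm's by sqrt(eta_B) rescales the spectrum by eta_B (Eq. 2.18),
# so after rescaling the mean ratio to the model is 1 by construction
F_B_scaled = F_B * eta_B
print(np.mean(MB_l / F_B_scaled))   # -> 1.0 (up to floating-point rounding)
```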
MB_l = 1.5 * 10**(-4) * l**(-2.29)
etaB_l = np.mean(MB_l[2:]/cls_foreg_mask80[2,2:])
## Rescaled foreground residuals maps:
t_lm = alm_foreg[0] * np.sqrt(etaB_l)
e_lm = alm_foreg[1] * np.sqrt(etaB_l)
b_lm = alm_foreg[2] * np.sqrt(etaB_l)
foreg_maps_TQU_scaled = hp.sphtfunc.alm2map([t_lm,e_lm,b_lm], nside, lmax=lmax, pol=True)
foreg_map_E_scaled = hp.sphtfunc.alm2map(e_lm, nside, lmax=lmax,pol=False)
cls_foreg_scaled = hp.sphtfunc.anafast(foreg_maps_TQU_scaled, lmax=lmax, iter=3, pol=True)
dls_foreg_scaled = (l*(l+1)) * cls_foreg_scaled / (2 * np.pi)
# np.save(os.path.join(path,"foreg_maps_TQU_scaled.npy"), foreg_maps_TQU_scaled)
# np.save(os.path.join(path,"foreg_map_E_scaled.npy"), foreg_map_E_scaled)
# np.save(os.path.join(path,"cls_foreg_scaled.npy"), cls_foreg_scaled)
# np.save(os.path.join(path,"dls_foreg_scaled.npy"), dls_foreg_scaled)
We can represent these residual foreground maps with $\verb+healpy.mollview+$ as shown in the following plot.
fig, ax = plt.subplots(ncols=2,nrows=2,figsize = (15,9))
plt.axes(ax[0,0])
hp.mollview(foreg_maps_TQU_scaled[0], unit=r'$\mu K_{CMB}$', title='T map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,0])
hp.mollview(foreg_maps_TQU_scaled[1], unit=r'$\mu K_{CMB}$', title='Q map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,1])
hp.mollview(foreg_maps_TQU_scaled[2], unit=r'$\mu K_{CMB}$', title='U map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[0,1])
hp.mollview(foreg_map_E_scaled, unit=r'$\mu K_{CMB}$', title='E map', bgcolor='white', norm='hist',hold=True)
plt.suptitle('Foreground residual maps')
plt.show()
# Check that the level of the B-mode foreground residual spectrum matches the model M_B,
# using the B-mode masked foreground map:
foreg_B = hp.sphtfunc.alm2map(b_lm, nside, lmax=lmax, pol=False)
masked80_foreg_B = mask80 * foreg_B
mB_l = hp.sphtfunc.anafast(masked80_foreg_B, lmax=lmax, iter=3, pol=False)
plt.plot(l, np.sqrt(mB_l)) #Power spectrum of the residuals masked map
plt.plot(l, np.sqrt(MB_l)) #Order of magnitude expected by the model
plt.ylabel(r'$\sqrt{C_{\ell}}\ \ (\mu K)$')
plt.xlabel(r'$\ell$')
plt.loglog()
plt.show()
Once all the components have been obtained, we can build the post-component-separation CMB maps (henceforth, CMB maps), which are composed of CMB and instrumental noise simulations plus the residual foreground maps.
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[1].data
hdul_total_E.close()
fig, ax = plt.subplots(ncols=2,nrows=2,figsize = (15,9))
plt.axes(ax[0,0])
hp.mollview(total_maps[0]+foreg_maps_TQU_scaled[0], unit=r'$\mu K_{CMB}$', title='T map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,0])
hp.mollview(total_maps[1]+foreg_maps_TQU_scaled[1], unit=r'$\mu K_{CMB}$', title='Q map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,1])
hp.mollview(total_maps[2]+foreg_maps_TQU_scaled[2], unit=r'$\mu K_{CMB}$', title='U map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[0,1])
hp.mollview(total_map_E + foreg_map_E_scaled, unit=r'$\mu K_{CMB}$', title='E map', bgcolor='white', norm='hist',hold=True)
plt.suptitle('CMB maps after component separation')
plt.show()
We can represent the spectra of these CMB maps, and of each component, to show the different contributions graphically.
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (20,5))
plt.rc('legend', fontsize=13) # legend fontsize
# ax[0].set_title('Temperature map (CMB + Noise + Foreg)')
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside],color='green', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(Dls_noise, axis=0)[0,2:2*nside],color='plum', label=r'$\langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], dls_foreg_scaled[0,2:2*nside],color='orange', label=r'$ F_{\ell}$',linewidth=3)
ax[0].plot(l[2:2*nside], dls_foreg_scaled[0,2:2*nside]*100,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[0].plot(l[2:2*nside], dls_foreg_scaled[0,2:2*nside]*2500,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[0].plot(l[2:2*nside], dls_foreg_scaled[0,2:2*nside]*10000,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[0].plot(l[2:2*nside], np.mean(Dls_total, axis=0)[0,2:2*nside],color='navy', label=r'$\langle S_{\ell} \rangle_{100}$',linewidth=3)
ax[0].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[0,2:2*nside]+np.mean(Dls_noise, axis=0)[0,2:2*nside]+dls_foreg_scaled[0,2:2*nside],linestyle='dashed',color='cyan', label=r'$\langle C_{\ell} \rangle_{100} + \langle N_{\ell} \rangle_{100} + F_{\ell}$',linewidth=3)
ax[0].set_yscale('log')
# ax[0].set_xscale('log')
ax[0].annotate(r'$(\times 10)$',xy=(200, 10**(1)), color='C1',fontweight='bold',rotation=-15, fontsize='12')
ax[0].annotate(r'$(\times 50)$',xy=(170, 4.5*10**(2)), color='C1',fontweight='bold',rotation=-15, fontsize='12')
ax[0].annotate(r'$(\times 100)$',xy=(120, 2.25*10**(4)), color='C1',fontweight='bold',rotation=-15, fontsize='12')
ax[0].set_ylabel(r'$D_\ell^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend(loc='upper right')
# ax[1].set_title('E-mode polarization map (CMB + Noise + Foreg)')
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside],color='green', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(Dls_noise, axis=0)[1,2:2*nside],color='plum', label=r'$\langle N_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], dls_foreg_scaled[1,2:2*nside],color='orange', label=r'$ F_{\ell}$',linewidth=3)
ax[1].plot(l[2:2*nside], dls_foreg_scaled[1,2:2*nside]*100,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[1].plot(l[2:2*nside], dls_foreg_scaled[1,2:2*nside]*2500,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[1].plot(l[2:2*nside], dls_foreg_scaled[1,2:2*nside]*10000,color='orange',linewidth=3, alpha=0.8, linestyle='dashed')
ax[1].plot(l[2:2*nside], np.mean(Dls_total, axis=0)[1,2:2*nside],color='navy', label=r'$\langle S_{\ell} \rangle_{100}$',linewidth=3)
ax[1].plot(l[2:2*nside], np.mean(Dls_cmb, axis=0)[1,2:2*nside]+np.mean(Dls_noise, axis=0)[1,2:2*nside]+dls_foreg_scaled[1,2:2*nside],linestyle='dashed',color='cyan', label=r'$\langle C_{\ell} \rangle_{100} + \langle N_{\ell} \rangle_{100} + F_{\ell}$',linewidth=3)
ax[1].set_yscale('log')
# ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_\ell^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
ax[1].annotate(r'$(\times 10)$',xy=(920, 5*10**(-9)), color='C1',fontweight='bold',rotation=-25, fontsize='12')
ax[1].annotate(r'$(\times 50)$',xy=(870, 5*10**(-7)), color='C1',fontweight='bold',rotation=-25, fontsize='12')
ax[1].annotate(r'$(\times 100)$',xy=(700, 5*10**(-4)), color='C1',fontweight='bold',rotation=-20, fontsize='12')
plt.show()
An analogous study to the one performed for the noise contribution can be carried out to analyse the relevance of the foregrounds. We introduce the amplitudes $f$ and $g$, which parametrize the remaining fraction of foreground residuals in the $X$ and $Y$ modes, respectively, yielding:
$\begin{align} F(f,g) = 1 - \sum_{\ell=2}^{\ell_{max}}\frac{w_{\ell}(f,g)}{w_{\ell}(f=g=0)} \frac{1}{\ell_{max}-1} \quad \text{with} \quad w_{\ell}(f,g) = \frac{C^{XY}_{\ell}+(fg) F^{XY}_{\ell}}{C^{XX}_{\ell} + N^{XX}_{\ell} + f^2 F^{XX}_{\ell}} \tag{2.20}. \end{align}$
We show these functions in the next plot where, for simplicity, we have assumed that the reduction in the temperature and $E$-mode polarization foregrounds is given by the same parameter, $f=g$.
n = np.linspace(0,1,100)
FT = np.empty((len(n)))
FE = np.empty((len(n)))
for i in np.arange(len(n)):
    foreg_T = (Cls_CAMB[3,2:lmax+1]+n[i]**2*cls_foreg_scaled[3,2:lmax+1])/(Cls_CAMB[0,2:lmax+1]+Cls_noise_T[2:lmax+1]+n[i]**2*cls_foreg_scaled[0,2:lmax+1])
    idealfg_T = (Cls_CAMB[3,2:lmax+1])/(Cls_CAMB[0,2:lmax+1]+Cls_noise_T[2:lmax+1])
    foreg_E = (Cls_CAMB[3,2:lmax+1]+n[i]**2*cls_foreg_scaled[3,2:lmax+1])/(Cls_CAMB[1,2:lmax+1]+Cls_noise_P[2:lmax+1]+n[i]**2*cls_foreg_scaled[1,2:lmax+1])
    idealfg_E = (Cls_CAMB[3,2:lmax+1])/(Cls_CAMB[1,2:lmax+1]+Cls_noise_P[2:lmax+1])
    FT[i] = 1 - sum(foreg_T/idealfg_T) * (1/(lmax-1))
    FE[i] = 1 - sum(foreg_E/idealfg_E) * (1/(lmax-1))
## Plot of Foregrounds filters:
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (20,6))
ax[0].plot(n, FT, linewidth=3)
ax[0].set_ylabel(r'$F_T(n)$')
ax[0].set_xlabel('n')
ax[1].plot(n, FE, linewidth=3)
ax[1].set_ylabel(r'$F_E(n)$')
ax[1].set_xlabel('n')
plt.show()
In this case we can see that the foreground residual level is indeed negligible in both cases, since the difference introduced if we could not reduce the residuals at all is of the order of $0.01\%$ for the $E$-mode polarization, and even smaller for the temperature.
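A compact, self-contained version of Eq. (2.20) for $f=g$ can be evaluated on toy spectra (all amplitudes and slopes here are hypothetical, numpy only):

```python
import numpy as np

ell = np.arange(2, 500)
C_TE = 30.0 * ell**(-1.8)          # toy TE cross-spectrum
C_TT = 1000.0 * ell**(-2.0)        # toy TT auto-spectrum
N_TT = 0.1 * np.ones(ell.size)     # toy noise spectrum
F_TT = 0.5 * ell**(-2.3)           # toy TT foreground residual
F_TE = 0.1 * ell**(-2.3)           # toy TE foreground residual

def F_reduction(f):
    """Eq. (2.20) with f = g: mean fractional change of the filter
    with respect to the foreground-free case (f = g = 0)."""
    w = (C_TE + f**2 * F_TE) / (C_TT + N_TT + f**2 * F_TT)
    w0 = C_TE / (C_TT + N_TT)
    return 1.0 - np.mean(w / w0)

print(F_reduction(0.0))   # -> 0.0: no residuals, the filter is unchanged
print(F_reduction(1.0))   # non-zero value quantifying the impact of the residuals
```

A small $|F(f)|$ for all $f\in[0,1]$ is what tells us the residual level is irrelevant for the filter.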
Finally, we will apply the raw mask to these maps to remove the galactic center as it is a highly contaminated region. The apodized mask will be introduced in later sections when computing the power spectrum with $\verb+NaMaster+$.
mask60_filter_total_maps = hp.ma(total_maps+foreg_maps_TQU_scaled)
mask60_filter_total_map_E = hp.ma(total_map_E+foreg_map_E_scaled)
mask60_filter_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_filter_total_map_E.mask = np.logical_not(mask) #UNSEEN
fig, ax = plt.subplots(ncols=2,nrows=2,figsize = (15,9))
plt.axes(ax[0,0])
hp.mollview(mask60_filter_total_maps[0], unit=r'$\mu K_{CMB}$', title='T map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,0])
hp.mollview(mask60_filter_total_maps[1], unit=r'$\mu K_{CMB}$', title='Q map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[1,1])
hp.mollview(mask60_filter_total_maps[2], unit=r'$\mu K_{CMB}$', title='U map', bgcolor='white', norm='hist',hold=True)
plt.axes(ax[0,1])
hp.mollview(mask60_filter_total_map_E, unit=r'$\mu K_{CMB}$', title='E map', bgcolor='white', norm='hist',hold=True)
plt.suptitle('Masked total maps')
plt.show()
In this section we present the correlated and uncorrelated temperature and $E$-mode maps obtained from the different data sets described above. We first analyse the correlated maps obtained from CMB-only maps, and later add contaminants in order to forecast the results that could be obtained from LiteBIRD observations. The main objective is to find an optimal way to compute the filter both for simulations and for data. We have worked with 100 simulations to extract conclusions. However, notice that actual CMB experiments only provide information about our Universe (i.e. only one realization). In order to obtain results compatible with observations, we have also considered the case where only one realization is available. To tackle this problem we develop, in section §3.4, a method to smooth the observed angular power spectra. The expected result is a smoothed filter in agreement with the results obtained from simulations, which will be used to obtain the correlated maps in a realistic scenario. Our final goal is to apply this smoothed filter to obtain correlated maps from the CMB maps after component separation, which include noise and foregrounds. Moreover, we would like to determine whether it is possible to extract conclusions without being conditioned by a theoretical model. This is useful when not all the received emission can be modelled, for example when the foreground emission is not negligible. Ultimately, this method for smoothing the measured angular power spectra would eventually allow us to extract information about the underlying cosmological model directly from the observations.
Once the filter has been computed, we can apply it to the CMB maps to obtain the correlated and uncorrelated maps, whose spectra are given by:
\begin{equation} D_{\ell}^{XcY} \simeq \frac{\left[\tilde{D}_{\ell}^{XY}\cdot (p_Xp_Y \cdot b_Xb_Y)\right]^2}{\tilde{D}_{\ell}^{YY} \cdot (p^2_Y b_Y^2)}, \quad D_{\ell}^{XncY} \simeq \tilde{D}_{\ell}^{XX} \cdot (p^2_X b_X^2) - \frac{\left[\tilde{D}_{\ell}^{XY}\cdot (p_Xp_Y b_Xb_Y)\right]^2}{\tilde{D}_{\ell}^{YY} \cdot (p^2_Y b_Y^2)}. \tag{3.1} \end{equation}
#########################################
#### Theoretical Wiener filter (CMB) ####
#########################################
cls_th_cmb_corr = np.empty((nbmc, 4, lmax+1),float)
dls_th_cmb_corr = np.empty((nbmc, 4, lmax+1),float)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) #choose one random simulation
# hdul_TQU = fits.open('files/maps_TQU.fits', mode='readonly', memmap=True)
# maps_TQU = hdul_TQU[i+1].data
# hdul_TQU.close()
# hdul_E = fits.open('files/map_E.fits', mode='readonly', memmap=True)
# map_E = hdul_E[i+1].data
# hdul_E.close()
alm_cmb = hp.sphtfunc.map2alm(maps_TQU, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_th_cmb_ET = hp.sphtfunc.smoothalm(alm_cmb[0], beam_window=np.insert(WT_th,[0,0],1), pol=False, mmax=None, verbose=True, inplace=True)
alm_th_cmb_TE = hp.sphtfunc.smoothalm(alm_cmb[1], beam_window=np.insert(WE_th,[0,0],1), pol=False, mmax=None, verbose=True, inplace=True)
##### Correlated and uncorrelated maps ####
map_th_cmb_EcT = hp.sphtfunc.alm2map(alm_th_cmb_ET, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_th_cmb_TcE = hp.sphtfunc.alm2map(alm_th_cmb_TE, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_th_cmb_EncT = map_E - map_th_cmb_EcT
map_th_cmb_TncE = maps_TQU[0] - map_th_cmb_TcE
##### Power spectrum #####
cls_th_cmb_corr[i,0] = hp.sphtfunc.anafast(map_th_cmb_EcT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_cmb_corr[i,1] = hp.sphtfunc.anafast(map_th_cmb_TcE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_cmb_corr[i,2] = hp.sphtfunc.anafast(map_th_cmb_EncT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_cmb_corr[i,3] = hp.sphtfunc.anafast(map_th_cmb_TncE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
dls_th_cmb_corr[i,0] = (l*(l+1)) * cls_th_cmb_corr[i,0] / (2 * np.pi)
dls_th_cmb_corr[i,1] = (l*(l+1)) * cls_th_cmb_corr[i,1] / (2 * np.pi)
dls_th_cmb_corr[i,2] = (l*(l+1)) * cls_th_cmb_corr[i,2] / (2 * np.pi)
dls_th_cmb_corr[i,3] = (l*(l+1)) * cls_th_cmb_corr[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"cls_th_cmb_corr.npy"), cls_th_cmb_corr)
# np.save(os.path.join(path,"dls_th_cmb_corr.npy"), dls_th_cmb_corr)
100.00 % completed ******************************************************
#######################################
#### Simulated Wiener filter (CMB) ####
#######################################
cls_cmb_corr = np.empty((nbmc, 4, lmax+1),float)
dls_cmb_corr = np.empty((nbmc, 4, lmax+1),float)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) #choose one random simulation
# hdul_TQU = fits.open('files/maps_TQU.fits', mode='readonly', memmap=True)
# maps_TQU = hdul_TQU[i+1].data
# hdul_TQU.close()
# hdul_E = fits.open('files/map_E.fits', mode='readonly', memmap=True)
# map_E = hdul_E[i+1].data
# hdul_E.close()
alm_cmb = hp.sphtfunc.map2alm(maps_TQU, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_cmb_ET = hp.sphtfunc.smoothalm(alm_cmb[0], beam_window=np.insert(WT_cmb,[0,0],1,axis=1)[i], pol=False, mmax=None, verbose=True, inplace=True) ##pixwin+fwhm##
alm_cmb_TE = hp.sphtfunc.smoothalm(alm_cmb[1], beam_window=np.insert(WE_cmb,[0,0],1,axis=1)[i], pol=False, mmax=None, verbose=True, inplace=True) ##pixwin+fwhm##
##### Correlated and uncorrelated maps ####
map_cmb_EcT = hp.sphtfunc.alm2map(alm_cmb_ET, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_cmb_TcE = hp.sphtfunc.alm2map(alm_cmb_TE, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_cmb_EncT = map_E - map_cmb_EcT
map_cmb_TncE = maps_TQU[0] - map_cmb_TcE
##### Power spectrum #####
cls_cmb_corr[i,0] = hp.sphtfunc.anafast(map_cmb_EcT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_cmb_corr[i,1] = hp.sphtfunc.anafast(map_cmb_TcE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_cmb_corr[i,2] = hp.sphtfunc.anafast(map_cmb_EncT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_cmb_corr[i,3] = hp.sphtfunc.anafast(map_cmb_TncE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
dls_cmb_corr[i,0] = (l*(l+1)) * cls_cmb_corr[i,0] / (2 * np.pi)
dls_cmb_corr[i,1] = (l*(l+1)) * cls_cmb_corr[i,1] / (2 * np.pi)
dls_cmb_corr[i,2] = (l*(l+1)) * cls_cmb_corr[i,2] / (2 * np.pi)
dls_cmb_corr[i,3] = (l*(l+1)) * cls_cmb_corr[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"cls_cmb_corr.npy"), cls_cmb_corr)
# np.save(os.path.join(path,"dls_cmb_corr.npy"), dls_cmb_corr)
100.00 % completed ******************************************************
Ideally, we would always choose the theoretical filter to obtain the full-sky correlated maps. In this case, from the CMB simulations, we can also notice the propagation of the numerical error introduced in the simulated CMB spectra by the approximations made in the $\verb+HEALPix+$ [Gorski, 2005] pixelization scheme.
Cls_cmb_corr = np.load(os.path.join(path,"Correlated/cls_cmb_corr.npy")).reshape(nbmc,4,lmax+1)
Dls_cmb_corr = np.load(os.path.join(path,"Correlated/dls_cmb_corr.npy")).reshape(nbmc,4,lmax+1)
Cls_th_cmb_corr = np.load("files/Correlated/cls_th_cmb_corr.npy")
Dls_th_cmb_corr = np.load("files/Correlated/dls_th_cmb_corr.npy")
### Comparison of correlated maps: theoretical vs simulated
%matplotlib inline
fig, ax = plt.subplots(2,2, figsize = (20,15))
# ax[0,0].set_title('Theoretical and simulated filters -> EcT map')
ax[0,0].plot(l[2:2*nside], np.mean(Dls_th_cmb_corr, axis=0)[0,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0,0].plot(l[2:2*nside],(Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[0,2:2*nside]*px[0][2:2*nside]**2*fw_TEB[2:2*nside,0]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[0,0].fill_between(l[2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[0,2:2*nside]-np.std(Dls_th_cmb_corr, axis=0)[0,2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[0,2:2*nside]+np.std(Dls_th_cmb_corr, axis=0)[0,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0,0].set_yscale('log')
ax[0,0].set_xscale('log')
ax[0,0].set_ylabel(r'$D_\ell^{EcT} \quad [\mu K^2]$')
ax[0,0].set_xlabel(r'$\ell$')
ax[0,0].legend()
# ax[0,1].set_title('Theoretical and simulated filters -> TcE map')
ax[0,1].plot(l[2:2*nside], np.mean(Dls_th_cmb_corr, axis=0)[1,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0,1].plot(l[2:2*nside],(Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[1,2:2*nside]*px[1][2:2*nside]**2*fw_TEB[2:2*nside,1]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[0,1].fill_between(l[2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[1,2:2*nside]-np.std(Dls_th_cmb_corr, axis=0)[1,2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[1,2:2*nside]+np.std(Dls_th_cmb_corr, axis=0)[1,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0,1].set_yscale('log')
ax[0,1].set_xscale('log')
ax[0,1].set_ylabel(r'$D_\ell^{TcE} \quad [\mu K^2]$')
ax[0,1].set_xlabel(r'$\ell$')
ax[0,1].legend()
# ax[1,0].set_title('Theoretical and simulated filters -> EncT map')
ax[1,0].plot(l[2:2*nside], np.mean(Dls_th_cmb_corr, axis=0)[2,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1,0].plot(l[2:2*nside], (Dls_CAMB[1,2:2*nside] * fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) - (Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[0,2:2*nside]*px[0][2:2*nside]**2*fw_TEB[2:2*nside,0]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[1,0].fill_between(l[2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[2,2:2*nside]-np.std(Dls_th_cmb_corr, axis=0)[2,2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[2,2:2*nside]+np.std(Dls_th_cmb_corr, axis=0)[2,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1,0].set_yscale('log')
ax[1,0].set_xscale('log')
ax[1,0].set_ylabel(r'$D_\ell^{EncT} \quad [\mu K^2]$')
ax[1,0].set_xlabel(r'$\ell$')
ax[1,0].legend()
# ax[1,1].set_title('Theoretical and simulated filters -> TncE map')
ax[1,1].plot(l[2:2*nside], np.mean(Dls_th_cmb_corr, axis=0)[3,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1,1].plot(l[2:2*nside], (Dls_CAMB[0,2:2*nside] * fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) - (Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[1,2:2*nside]*px[1][2:2*nside]**2*fw_TEB[2:2*nside,1]**2), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[1,1].fill_between(l[2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[3,2:2*nside]-np.std(Dls_th_cmb_corr, axis=0)[3,2:2*nside],np.mean(Dls_th_cmb_corr, axis=0)[3,2:2*nside]+np.std(Dls_th_cmb_corr, axis=0)[3,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1,1].set_yscale('log')
ax[1,1].set_xscale('log')
ax[1,1].set_ylabel(r'$D_\ell^{TncE} \quad [\mu K^2]$')
ax[1,1].set_xlabel(r'$\ell$')
ax[1,1].legend()
plt.show()
From the simulated CMB maps we obtain the correlated and uncorrelated parts of the temperature and $E$-mode polarization maps. The plots show the theoretical result obtained with $\verb+CAMB+$ (dashed line), the mean over 100 simulations (solid line) and the standard deviation (shaded).
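By construction, the correlated and uncorrelated parts in Eq. (3.1) add up to the beam- and pixel-corrected auto-spectrum. This can be checked numerically with toy spectra (all values hypothetical, numpy only):

```python
import numpy as np

ell = np.arange(2, 100)
D_XX = 1000.0 * ell**(-2.0)          # toy auto-spectrum of X
D_YY = 40.0 * ell**(-2.2)            # toy auto-spectrum of Y
D_XY = 0.5 * np.sqrt(D_XX * D_YY)    # toy cross-spectrum (correlation coefficient 0.5)
pb_X, pb_Y = 0.98, 0.95              # effective pixel-window x beam factors (constants for simplicity)

# Eq. (3.1): correlated and uncorrelated parts of X with respect to Y
D_XcY  = (D_XY * pb_X * pb_Y)**2 / (D_YY * pb_Y**2)
D_XncY = D_XX * pb_X**2 - D_XcY

# The two parts recover the full (beam- and pixel-corrected) auto-spectrum
print(np.allclose(D_XcY + D_XncY, D_XX * pb_X**2))   # -> True
```

Note that for a perfectly correlated pair ($D^{XY}_\ell = \sqrt{D^{XX}_\ell D^{YY}_\ell}$) the uncorrelated part would vanish identically.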
The filter obtained is applied to the CMB plus noise maps, and we obtain the correlated and uncorrelated maps as in the previous case. The spectra of these maps are given by:
\begin{equation} D_{\ell}^{XcY} \simeq \frac{\left[\tilde{D}_{\ell}^{XY}\cdot (p_Xp_Y \cdot b_Xb_Y)\right]^2}{\tilde{D}_{\ell}^{YY} \cdot (p^2_Y b_Y^2) + \tilde{N}^{YY}_{\ell}},\quad D_{\ell}^{XncY} \simeq \tilde{D}_{\ell}^{XX} \cdot (p^2_X b_X^2) - \frac{\left[\tilde{D}_{\ell}^{XY}\cdot (p_Xp_Y \cdot b_Xb_Y)\right]^2}{\tilde{D}_{\ell}^{YY} \cdot (p^2_Y b_Y^2) + \tilde{N}^{YY}_{\ell}}. \tag{3.2} \end{equation}
###############################################
#### Theoretical Wiener filter (CMB+Noise) ####
###############################################
cls_th_total_corr = np.empty((nbmc, 4,lmax+1),float)
dls_th_total_corr = np.empty((nbmc, 4 ,lmax+1),float)
# prihdu = fits.PrimaryHDU()
# hdulist = fits.HDUList([prihdu])
# hdulist.writeto('files/corr_th_total_maps.fits',overwrite=True)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) #choose one random simulation
# hdul_total = fits.open('files/total_maps.fits', mode='readonly', memmap=True)
# total_maps = hdul_total[i+1].data
# hdul_total.close()
# hdul_total_E = fits.open('files/total_map_E.fits', mode='readonly', memmap=True)
# total_map_E = hdul_total_E[i+1].data
# hdul_total_E.close()
alm_total = hp.sphtfunc.map2alm(total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_th_total_ET = hp.sphtfunc.smoothalm(alm_total[0], beam_window=np.insert(WT_th_noise,[0,0],1), pol=False, mmax=None, verbose=True, inplace=True)
alm_th_total_TE = hp.sphtfunc.smoothalm(alm_total[1], beam_window=np.insert(WE_th_noise,[0,0],1), pol=False, mmax=None, verbose=True, inplace=True)
##### Correlated and uncorrelated maps####
map_th_total_EcT = hp.sphtfunc.alm2map(alm_th_total_ET, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_th_total_TcE = hp.sphtfunc.alm2map(alm_th_total_TE, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_th_total_EncT = total_map_E - map_th_total_EcT
map_th_total_TncE = total_maps[0] - map_th_total_TcE
# col1 = fits.Column(name='EcT', format='E', array=map_th_total_EcT)
# col2 = fits.Column(name='TcE', format='E', array=map_th_total_TcE)
# col3 = fits.Column(name='EncT', format='E', array=map_th_total_EncT)
# col4 = fits.Column(name='TncE', format='E', array=map_th_total_TncE)
# cols = fits.ColDefs([col1, col2, col3, col4])
# corr_th_total_maps = fits.BinTableHDU.from_columns(cols)
# hdulist.append(corr_th_total_maps)
##### Power spectrum #####
cls_th_total_corr[i,0] = hp.sphtfunc.anafast(map_th_total_EcT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_total_corr[i,1] = hp.sphtfunc.anafast(map_th_total_TcE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_total_corr[i,2] = hp.sphtfunc.anafast(map_th_total_EncT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_th_total_corr[i,3] = hp.sphtfunc.anafast(map_th_total_TncE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
dls_th_total_corr[i,0] = (l*(l+1)) * cls_th_total_corr[i,0] / (2 * np.pi)
dls_th_total_corr[i,1] = (l*(l+1)) * cls_th_total_corr[i,1] / (2 * np.pi)
dls_th_total_corr[i,2] = (l*(l+1)) * cls_th_total_corr[i,2] / (2 * np.pi)
dls_th_total_corr[i,3] = (l*(l+1)) * cls_th_total_corr[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"cls_th_total_corr.npy"), cls_th_total_corr)
# np.save(os.path.join(path,"dls_th_total_corr.npy"), dls_th_total_corr)
100.00 % completed ******************************************************
#############################################
#### Simulated Wiener filter (CMB+Noise) ####
#############################################
cls_total_corr = np.empty((nbmc, 4,lmax+1),float)
dls_total_corr = np.empty((nbmc, 4 ,lmax+1),float)
# prihdu = fits.PrimaryHDU()
# hdulist = fits.HDUList([prihdu])
# hdulist.writeto('files/corr_total_maps.fits',overwrite=True)
# for i in np.arange(nbmc):
i = np.random.randint(low=0, high=nbmc) #choose one random simulation
# hdul_total = fits.open('files/total_maps.fits', mode='readonly', memmap=True)
# total_maps = hdul_total[i+1].data
# hdul_total.close()
# hdul_total_E = fits.open('files/total_map_E.fits', mode='readonly', memmap=True)
# total_map_E = hdul_total_E[i+1].data
# hdul_total_E.close()
alm_total = hp.sphtfunc.map2alm(total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_total_ET = hp.sphtfunc.smoothalm(alm_total[0], beam_window=np.insert(WT_noise,[0,0],1,axis=1)[i], pol=False, mmax=None, verbose=True, inplace=True) ##pixwin+fwhm##
alm_total_TE = hp.sphtfunc.smoothalm(alm_total[1], beam_window=np.insert(WE_noise,[0,0],1,axis=1)[i], pol=False, mmax=None, verbose=True, inplace=True) ##pixwin+fwhm##
##### Correlated and uncorrelated maps####
map_total_EcT = hp.sphtfunc.alm2map(alm_total_ET, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_total_TcE = hp.sphtfunc.alm2map(alm_total_TE, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin and fwhm included in the alm ##
map_total_EncT = total_map_E - map_total_EcT
map_total_TncE = total_maps[0] - map_total_TcE
# col1 = fits.Column(name='EcT', format='E', array=map_total_EcT)
# col2 = fits.Column(name='TcE', format='E', array=map_total_TcE)
# col3 = fits.Column(name='EncT', format='E', array=map_total_EncT)
# col4 = fits.Column(name='TncE', format='E', array=map_total_TncE)
# cols = fits.ColDefs([col1, col2, col3, col4])
# corr_total_maps = fits.BinTableHDU.from_columns(cols)
# hdulist.append(corr_total_maps)
##### Power spectrum #####
cls_total_corr[i,0] = hp.sphtfunc.anafast(map_total_EcT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_total_corr[i,1] = hp.sphtfunc.anafast(map_total_TcE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_total_corr[i,2] = hp.sphtfunc.anafast(map_total_EncT, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
cls_total_corr[i,3] = hp.sphtfunc.anafast(map_total_TncE, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=False)
dls_total_corr[i,0] = (l*(l+1)) * cls_total_corr[i,0] / (2 * np.pi)
dls_total_corr[i,1] = (l*(l+1)) * cls_total_corr[i,1] / (2 * np.pi)
dls_total_corr[i,2] = (l*(l+1)) * cls_total_corr[i,2] / (2 * np.pi)
dls_total_corr[i,3] = (l*(l+1)) * cls_total_corr[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"cls_total_corr.npy"), cls_total_corr)
# np.save(os.path.join(path,"dls_total_corr.npy"), dls_total_corr)
100.00 % completed ******************************************************
Cls_total_corr = np.load(os.path.join(path,"Correlated/cls_total_corr.npy"))
Dls_total_corr = np.load(os.path.join(path,"Correlated/dls_total_corr.npy"))
Cls_th_total_corr = np.load(os.path.join(path,"Correlated/cls_th_total_corr.npy"))
Dls_th_total_corr = np.load(os.path.join(path,"Correlated/dls_th_total_corr.npy"))
### Comparison of correlated maps: theoretical vs simulated
%matplotlib inline
fig, ax = plt.subplots(2,2, figsize = (20,15))
# ax[0,0].set_title('Theoretical and simulated filters -> EcT map')
ax[0,0].plot(l[2:2*nside], np.mean(Dls_th_total_corr, axis=0)[0,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0,0].plot(l[2:2*nside],(Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[0,2:2*nside]*px[0][2:2*nside]**2*fw_TEB[2:2*nside,0]**2 + Dls_noise_T[2:2*nside]), color='k',linestyle='dashed',alpha=0.7,label=r'$T_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[0,0].fill_between(l[2:2*nside],np.mean(Dls_th_total_corr, axis=0)[0,2:2*nside]-np.std(Dls_th_total_corr, axis=0)[0,2:2*nside],np.mean(Dls_th_total_corr, axis=0)[0,2:2*nside]+np.std(Dls_th_total_corr, axis=0)[0,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0,0].set_yscale('log')
ax[0,0].set_xscale('log')
ax[0,0].set_ylabel(r'$D_\ell^{EcT} \quad [\mu K^2]$')
ax[0,0].set_xlabel(r'$\ell$')
ax[0,0].legend()
# ax[0,1].set_title('Theoretical and simulated filters -> TcE map')
ax[0,1].plot(l[2:2*nside], np.mean(Dls_th_total_corr, axis=0)[1,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[0,1].plot(l[2:2*nside],(Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[1,2:2*nside]*px[1][2:2*nside]**2*fw_TEB[2:2*nside,1]**2 + Dls_noise_P[2:2*nside]), color='k',linestyle='dashed',alpha=0.7,label=r'$T_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[0,1].fill_between(l[2:2*nside],np.mean(Dls_th_total_corr, axis=0)[1,2:2*nside]-np.std(Dls_th_total_corr, axis=0)[1,2:2*nside],np.mean(Dls_th_total_corr, axis=0)[1,2:2*nside]+np.std(Dls_th_total_corr, axis=0)[1,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[0,1].set_yscale('log')
ax[0,1].set_xscale('log')
ax[0,1].set_ylabel(r'$D_\ell^{TcE} \quad [\mu K^2]$')
ax[0,1].set_xlabel(r'$\ell$')
ax[0,1].legend()
# ax[1,0].set_title('Theoretical and simulated filters -> EncT map')
ax[1,0].plot(l[2:2*nside], np.mean(Dls_th_total_corr, axis=0)[2,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1,0].plot(l[2:2*nside], (Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside]) - (Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[0,2:2*nside]*px[0][2:2*nside]**2*fw_TEB[2:2*nside,0]**2 + Dls_noise_T[2:2*nside]), color='k',linestyle='dashed',alpha=0.7,label=r'$T_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[1,0].fill_between(l[2:2*nside],np.mean(Dls_th_total_corr, axis=0)[2,2:2*nside]-np.std(Dls_th_total_corr, axis=0)[2,2:2*nside],np.mean(Dls_th_total_corr, axis=0)[2,2:2*nside]+np.std(Dls_th_total_corr, axis=0)[2,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1,0].set_yscale('log')
ax[1,0].set_xscale('log')
ax[1,0].set_ylabel(r'$D_\ell^{EncT} \quad [\mu K^2]$')
ax[1,0].set_xlabel(r'$\ell$')
ax[1,0].legend()
# ax[1,1].set_title('Theoretical and simulated filters -> TncE map')
ax[1,1].plot(l[2:2*nside], np.mean(Dls_th_total_corr, axis=0)[3,2:2*nside], color='orange', label=r'$\langle C_{\ell} \rangle_{100}$',linewidth=3)
ax[1,1].plot(l[2:2*nside], (Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside]) - (Dls_CAMB[3,2:2*nside]*fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1]*px[0][2:2*nside]*px[1][2:2*nside])**2 / (Dls_CAMB[1,2:2*nside]*px[1][2:2*nside]**2*fw_TEB[2:2*nside,1]**2 + Dls_noise_P[2:2*nside]), color='k',linestyle='dashed',alpha=0.7,label=r'$T_{\ell}^{TH}$',linewidth=3) #TT theoretical
ax[1,1].fill_between(l[2:2*nside],np.mean(Dls_th_total_corr, axis=0)[3,2:2*nside]-np.std(Dls_th_total_corr, axis=0)[3,2:2*nside],np.mean(Dls_th_total_corr, axis=0)[3,2:2*nside]+np.std(Dls_th_total_corr, axis=0)[3,2:2*nside], color='lightgreen', label=r'$\sigma_{100}$')
ax[1,1].set_yscale('log')
ax[1,1].set_xscale('log')
ax[1,1].set_ylabel(r'$D_\ell^{TncE} \quad [\mu K^2]$')
ax[1,1].set_xlabel(r'$\ell$')
ax[1,1].legend()
plt.show()
From simulated CMB and noise maps, we obtain the correlated and uncorrelated parts of the temperature and $E$-mode polarization maps. The figure shows the theoretical result obtained with $\verb+CAMB+$ (dashed line), the mean over 100 simulations (solid line) and the standard deviation (shaded band).
As we have seen, computing the power spectrum of a masked map introduces coupling between different scales, so the angular power spectrum at large scales can contaminate the spectrum at smaller scales. We have chosen $\verb+NaMaster+$ as our power spectrum estimator to study the different filter calculation pipelines that yield the optimal correlated and uncorrelated maps. We explore the difference between computing the spectra of the masked correlated and uncorrelated maps with the pseudo$-C_{\ell}$ algorithm and obtaining the spectra involved in the filter definition. In the first case, the intention is to recover the genuine angular power spectra, correcting the effect of the mask. In the second case, the influence of the mask is deliberately kept in the angular power spectra used to calculate the filters, since those filters will be applied to masked maps. This leaves two possibilities for estimating the angular power spectrum of the masked maps entering the filter: $\verb+healpy.anafast+$ or $\verb+NaMaster+$. In addition, once we have the angular power spectrum of each simulation, we can either average the spectra over the 100 simulations and then form the filter, or build one filter per simulation and average the filters at the end. This leads to four possible filters to consider, summarized in the following figure.

In order to handle the masked CMB+Noise maps, we need to know which filters recover the correlated maps best. We generate a new set of 100 simulations of masked maps to construct two new filters: for the first we obtain the power spectrum with $\verb+healpy.anafast()+$ (hereafter the anafast filter), and for the second we compute it with $\verb+NaMaster+$ (hereafter the namaster filter).
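The two averaging orders described above ("mean the spectra, then build the filter" vs "build one filter per simulation, then average the filters") do not commute, which is why they count as distinct pipelines. A small numpy sketch with toy spectra (hypothetical values standing in for the masked $C_{\ell}^{TE}$ and $C_{\ell}^{TT}$, not the notebook's simulations) illustrates the difference:

```python
import numpy as np

rng = np.random.default_rng(0)
nsims, nell = 100, 64

# Toy cross- and auto-spectra for each simulation.
cross = 1.0 + 0.3 * rng.standard_normal((nsims, nell))
auto = 2.0 + 0.3 * rng.standard_normal((nsims, nell))

# "MF" ordering: average the spectra over simulations first, then take the ratio.
w_MF = np.mean(cross, axis=0) / np.mean(auto, axis=0)

# "FM" ordering: build one filter per simulation, then average the filters.
w_FM = np.mean(cross / auto, axis=0)

# The orderings differ because E[X/Y] != E[X]/E[Y] in general.
print(np.max(np.abs(w_MF - w_FM)))
```

The mean-of-ratios estimator carries a Jensen-type bias from the scatter of the denominator, which is one reason the four pipelines can disagree at low $\ell$ where the scatter is largest.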
#################
### CMB maps ####
#################
filter_cls_cmb = np.empty((nbmc, 6, lmax+1),float)
filter_dls_cmb = np.empty((nbmc, 6, lmax+1),float)
# filter_maps_TQU = fits.PrimaryHDU()
# filter_maps_TQU.writeto('files/Sim_filters/filter_maps_TQU.fits',overwrite=True)
# for i in np.arange(nbmc):
np.random.randint(low=0, high=nbmc, size=1)
filter_alm_cmb = hp.sphtfunc.synalm(Cls_CAMB, lmax=lmax, mmax=None, new=True, verbose=True) #TEB
filter_maps_TQU = hp.sphtfunc.alm2map(filter_alm_cmb, nside, lmax=None, mmax=None, pixwin=True, fwhm=fwhm, sigma=None, pol=True, inplace=False, verbose=True) #TQU
# fits.append('files/Sim_filters/filter_maps_TQU.fits', filter_maps_TQU)
filter_cls_cmb[i] = hp.sphtfunc.anafast(filter_maps_TQU, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True) #cls TEB (pixwin+fwhm)
filter_dls_cmb[i] = (l*(l+1)) * filter_cls_cmb[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_cmb.npy"), filter_cls_cmb)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_cmb.npy"), filter_dls_cmb)
###################
### Noise maps ####
###################
## LiteBIRD
# 2.6 muK arcmin -> per-pixel sensitivity at nside = 512 (muK)
sigma_T = (2.6/np.sqrt(2)) / (Anside*(180*60/np.pi))
sigma_P = 2.6 / (Anside*(180*60/np.pi))
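The lines above convert the LiteBIRD-like sensitivity of 2.6 $\mu$K$\cdot$arcmin into a per-pixel white-noise standard deviation by dividing by the pixel side expressed in arcmin; here `Anside` is assumed to be the HEALPix pixel side $\sqrt{4\pi/n_{\rm pix}}$ in radians. A self-contained sketch of that conversion:

```python
import numpy as np

def noise_sigma_per_pixel(sens_uk_arcmin, nside):
    """Convert a white-noise sensitivity in muK*arcmin into a per-pixel
    standard deviation in muK, taking sqrt(4*pi/npix) as the pixel side
    (an assumption matching the notebook's Anside)."""
    npix = 12 * nside**2
    pix_side_rad = np.sqrt(4.0 * np.pi / npix)
    pix_side_arcmin = pix_side_rad * (180.0 * 60.0 / np.pi)
    return sens_uk_arcmin / pix_side_arcmin

# Polarization noise at 2.6 muK*arcmin; temperature is lower by sqrt(2).
sigma_P = noise_sigma_per_pixel(2.6, 512)
sigma_T = noise_sigma_per_pixel(2.6 / np.sqrt(2), 512)
print(sigma_T, sigma_P)
```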
filter_cls_noise = np.zeros((nbmc, 6, lmax+1),float)
filter_dls_noise = np.zeros((nbmc, 6, lmax+1),float)
# filter_noise_maps = fits.PrimaryHDU()
# filter_noise_maps.writeto('files/Sim_filters/filter_noise_maps.fits',overwrite=True)
# for i in np.arange(nbmc):
np.random.randint(low=0, high=nbmc, size=1)
filter_noise_map_T = np.random.normal(0,sigma_T,npix)
filter_noise_map_Q = np.random.normal(0,sigma_P,npix)
filter_noise_map_U = np.random.normal(0,sigma_P,npix)
filter_noise_maps = np.array([filter_noise_map_T, filter_noise_map_Q, filter_noise_map_U], np.float64)
# fits.append('files/Sim_filters/filter_noise_maps.fits', filter_noise_maps)
filter_cls_noise[i] = hp.sphtfunc.anafast(filter_noise_maps, map2=None, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
filter_dls_noise[i] = (l*(l+1)) * filter_cls_noise[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_noise.npy"), filter_cls_noise)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_noise.npy"), filter_dls_noise)
###############################
### Total (CMB+Noise) maps ####
###############################
filter_cls_total = np.zeros((nbmc, 6, lmax+1),float)
filter_dls_total = np.zeros((nbmc, 6, lmax+1),float)
# filter_total_maps = fits.PrimaryHDU()
# filter_total_maps.writeto('files/Sim_filters/filter_total_maps.fits',overwrite=True)
# for i in np.arange(nbmc):
np.random.randint(low=0, high=nbmc, size=1)
# hdul_CMB = fits.open('files/Sim_filters/filter_maps_TQU.fits', mode='readonly', memmap=True)
# filter_maps_TQU = hdul_CMB[i+1].data
# hdul_CMB.close()
# hdul_noise = fits.open('files/Sim_filters/filter_noise_maps.fits', mode='readonly', memmap=True)
# filter_noise_maps = hdul_noise[i+1].data
# hdul_noise.close()
# filter_total_maps = filter_maps_TQU + filter_noise_maps #TQU ##pixwin+fwhm##
# fits.append('files/Sim_filters/filter_total_maps.fits', filter_total_maps)
filter_cls_total[i] = hp.sphtfunc.anafast(filter_total_maps, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
filter_dls_total[i] = (l*(l+1)) * filter_cls_total[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_total.npy"), filter_cls_total)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_total.npy"), filter_dls_total)
#################################################################################################################
## Set 2: 100 CMB+Noise simulations (generate filters --> filter_maps_TQU+filter_noise_maps+filter_total_maps) ##
#################################################################################################################
# ### cmb --> pixwin + fwhm=30 arcmin ###
filterCls_cmb = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_cmb.npy")).reshape(nbmc, 6, lmax+1) #pixwin+fwmh
filterDls_cmb = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_cmb.npy")).reshape(nbmc, 6, lmax+1)
# ### noise --> simulated from LiteBIRD 2.6 muKarcmin ###
filterCls_noise = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_noise.npy")).reshape(nbmc,6,lmax+1) #noise
filterDls_noise = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_noise.npy")).reshape(nbmc,6,lmax+1)
# ### total --> cmb + noise ###
filterCls_total = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_cls_total.npy")).reshape(nbmc,6,lmax+1) #pixwin+fwmh+noise
filterDls_total = np.load(os.path.join(path,"Sim_filters/CMB+Noise/filter_dls_total.npy")).reshape(nbmc,6,lmax+1)
#######################
### ANAFAST FILTER ####
#######################
filter_cls_total_mask60_anafast = np.zeros((nbmc, 6, lmax+1),float)
filter_dls_total_mask60_anafast = np.zeros((nbmc, 6, lmax+1),float)
# for i in np.arange(nbmc):
# hdul_filter_total = fits.open('files/Sim_filters/filter_total_maps.fits', mode='readonly', memmap=True)
# filter_total_maps = hdul_filter_total[i+1].data
# hdul_filter_total.close()
# Raw masking:
mask60_filter_total_maps = hp.ma(filter_total_maps)
mask60_filter_total_maps.mask = np.logical_not(mask) #UNSEEN
filter_cls_total_mask60_anafast[i] = hp.sphtfunc.anafast(mask60_filter_total_maps, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
filter_dls_total_mask60_anafast[i] = (l*(l+1)) * filter_cls_total_mask60_anafast[i] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
filterCls_total_mask60_anafast = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_anafast.npy")).reshape(nbmc,6,lmax+1)
filterDls_total_mask60_anafast = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_anafast.npy")).reshape(nbmc,6,lmax+1)
SAMF_T = np.mean(filterCls_total_mask60_anafast[:,3,2:],axis=0)/np.mean(filterCls_total_mask60_anafast[:,0,2:],axis=0)
SAMF_E = np.mean(filterCls_total_mask60_anafast[:,3,2:],axis=0)/np.mean(filterCls_total_mask60_anafast[:,1,2:],axis=0)
SAFM_T = np.mean(filterCls_total_mask60_anafast[:,3,2:]/filterCls_total_mask60_anafast[:,0,2:],axis=0)
SAFM_E = np.mean(filterCls_total_mask60_anafast[:,3,2:]/filterCls_total_mask60_anafast[:,1,2:],axis=0)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_anafast.npy"), filter_cls_total_mask60_anafast)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_anafast.npy"), filter_dls_total_mask60_anafast)
# np.save(os.path.join(path,"Correlated/SAMF_T.npy"), SAMF_T)
# np.save(os.path.join(path,"Correlated/SAMF_E.npy"), SAMF_E)
# np.save(os.path.join(path,"Correlated/SAFM_T.npy"), SAFM_T)
# np.save(os.path.join(path,"Correlated/SAFM_E.npy"), SAFM_E)
########################
### NAMASTER FILTER ####
########################
b = nmt.NmtBin.from_nside_linear(nside, 1)
lm = np.arange(2,int(b.get_n_bands())+2)
filter_cls_total_mask60_namaster = np.zeros((nbmc, 4, lmax-1),float)
filter_dls_total_mask60_namaster = np.zeros((nbmc, 4, lmax-1),float)
# for i in np.arange(nbmc):
# hdul_filter_total = fits.open('files/Sim_filters/filter_total_maps.fits', mode='readonly', memmap=True)
# filter_total_maps = hdul_filter_total[i+1].data
# hdul_filter_total.close()
alm_TEB = hp.sphtfunc.map2alm(filter_total_maps, lmax=lmax, mmax=None, verbose=True, pol=True)
filter_total_map_E = hp.sphtfunc.alm2map(alm_TEB[1], nside, lmax=lmax, mmax=None, pol=False, verbose=True)
# Raw masking
mask60_filter_total_maps = hp.ma(filter_total_maps)
mask60_filter_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_filter_total_map_E = hp.ma(filter_total_map_E)
mask60_filter_total_map_E.mask = np.logical_not(mask)
#### Power spectrum with NAMASTER ####
filter_cls_total_mask60_namaster[i,0] = master([mask60_filter_total_maps[0]], mask60_apod, b)[0]
filter_cls_total_mask60_namaster[i,1] = master([mask60_filter_total_map_E], mask60_apod, b)[0]
filter_cls_total_mask60_namaster[i,3] = master_cross_spectra(mask60_filter_total_maps[0],mask60_filter_total_map_E, mask60_apod, b)[0]
filter_dls_total_mask60_namaster[i,0] = (lm*(lm+1)) * filter_cls_total_mask60_namaster[i,0] / (2 * np.pi)
filter_dls_total_mask60_namaster[i,1] = (lm*(lm+1)) * filter_cls_total_mask60_namaster[i,1] / (2 * np.pi)
filter_dls_total_mask60_namaster[i,3] = (lm*(lm+1)) * filter_cls_total_mask60_namaster[i,3] / (2 * np.pi)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_namaster.npy"), filter_cls_total_mask60_namaster)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_namaster.npy"), filter_dls_total_mask60_namaster)
filterCls_total_mask60_namaster = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_namaster.npy")).reshape(nbmc,4,lmax-1)
filterDls_total_mask60_namaster = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_namaster.npy")).reshape(nbmc,4,lmax-1)
###############################
### NAMASTER BINNED FILTER ####
###############################
ells, bpws, weights = get_binning(nside)
bin5 = nmt.NmtBin(nside, ells=ells, bpws=bpws,weights=weights)
ell_eff = bin5.get_effective_ells()
## Bandpower info:
## Check everything is going well
# print("Bandpower info:")
# print(" %d bandpowers" % (bin5.get_n_bands()))
# print("The columns in the following table are:")
# print(" [1]-band index, [2]-list of multipoles, "
# "[3]-list of weights, [4]=effective multipole")
# for i in range(bin5.get_n_bands()):
# print(i, bin5.get_ell_list(i), bin5.get_weight_list(i), ell_eff[i])
# print("")
# (1) NaMaster Cls previously obtained --> binning scheme --> Cls binned --> NaMaster binned filter
filter_cls_total_mask60_namaster_binned = np.empty((nbmc, 4, int(bin5.get_n_bands())),float)
filter_dls_total_mask60_namaster_binned = np.empty((nbmc, 4, int(bin5.get_n_bands())),float)
for i in np.arange(nbmc):
for j in np.arange(4):
filter_cls_total_mask60_namaster_binned[i,j] = bin5.bin_cell(np.insert(filterCls_total_mask60_namaster[i,j],[0,0],0))
filter_dls_total_mask60_namaster_binned[i,j] = bin5.bin_cell(np.insert(filterDls_total_mask60_namaster[i,j],[0,0],0))
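The `np.insert(..., [0, 0], 0)` calls above prepend zeros at $\ell = 0, 1$, since `bin_cell` expects spectra starting at $\ell = 0$ while the NaMaster output starts at $\ell = 2$. A quick sketch of that padding (pure numpy, no `pymaster` required):

```python
import numpy as np

# A spectrum defined for ell = 2 .. 9 (hypothetical values).
cl_from_ell2 = np.arange(2.0, 10.0)

# Prepend zeros at ell = 0 and ell = 1 so the array index matches the multipole.
cl_from_ell0 = np.insert(cl_from_ell2, [0, 0], 0)
print(cl_from_ell0[:4])  # entries for ell = 0, 1, 2, 3
```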
# Save:
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_namaster_binned.npy"), filter_cls_total_mask60_namaster_binned)
# np.save(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_namaster_binned.npy"), filter_dls_total_mask60_namaster_binned)
filterCls_total_mask60_namaster_binned = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_cls_total_mask60_namaster_binned.npy")).reshape(nbmc,4,int(bin5.get_n_bands()))
filterDls_total_mask60_namaster_binned = np.load(os.path.join(path,"Sim_filters/CMB+Noise+Mask/filter_dls_total_mask60_namaster_binned.npy")).reshape(nbmc,4,int(bin5.get_n_bands()))
filterCls_total_mask60_namaster_binned_mean_int = interpolation(ell_eff,np.mean(filterCls_total_mask60_namaster_binned,axis=0),np.arange(2,3*nside))
SNMF_T = filterCls_total_mask60_namaster_binned_mean_int[3]/filterCls_total_mask60_namaster_binned_mean_int[0]
SNMF_E = filterCls_total_mask60_namaster_binned_mean_int[3]/filterCls_total_mask60_namaster_binned_mean_int[1]
SNFM_T = interpolation(ell_eff, np.mean(filterCls_total_mask60_namaster_binned[:,3]/filterCls_total_mask60_namaster_binned[:,0],axis=0), np.arange(2,3*nside))
SNFM_E = interpolation(ell_eff, np.mean(filterCls_total_mask60_namaster_binned[:,3]/filterCls_total_mask60_namaster_binned[:,1],axis=0), np.arange(2,3*nside))
# np.save(os.path.join(path,"Correlated/SNMF_T.npy"), SNMF_T)
# np.save(os.path.join(path,"Correlated/SNMF_E.npy"), SNMF_E)
# np.save(os.path.join(path,"Correlated/SNFM_T.npy"), SNFM_T)
# np.save(os.path.join(path,"Correlated/SNFM_E.npy"), SNFM_E)
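The binned NaMaster band powers are mapped back onto every multipole with the `interpolation` helper, whose implementation is not shown in this cell. A simple stand-in based on linear interpolation with `np.interp` (an assumption about what the helper does, not a copy of it):

```python
import numpy as np

def interp_bandpowers(ell_eff, cl_binned, ell_out):
    """Linearly interpolate binned band powers onto every multipole in
    ell_out. Values outside [ell_eff[0], ell_eff[-1]] are held constant,
    which is np.interp's default edge behaviour."""
    cl_binned = np.atleast_2d(cl_binned)
    return np.array([np.interp(ell_out, ell_eff, row) for row in cl_binned])

# Hypothetical effective multipoles of 5-wide bands and a smooth test spectrum.
ell_eff = np.arange(4.0, 100.0, 5.0)
cl_band = 1.0 / ell_eff**2
ell_out = np.arange(2, 100)
cl_full = interp_bandpowers(ell_eff, cl_band, ell_out)
```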
SAMF_T = np.load(os.path.join(path,"Correlated/SAMF_T.npy"))
SAMF_E = np.load(os.path.join(path,"Correlated/SAMF_E.npy"))
SAFM_T = np.load(os.path.join(path,"Correlated/SAFM_T.npy"))
SAFM_E = np.load(os.path.join(path,"Correlated/SAFM_E.npy"))
SNMF_T = np.load(os.path.join(path,"Correlated/SNMF_T.npy"))
SNMF_E = np.load(os.path.join(path,"Correlated/SNMF_E.npy"))
SNFM_T = np.load(os.path.join(path,"Correlated/SNFM_T.npy"))
SNFM_E = np.load(os.path.join(path,"Correlated/SNFM_E.npy"))
%matplotlib inline
fig, ax = plt.subplots(1, 2, figsize = (16,5))
# ax[0,0].set_title('Filter for almT')
ax[0].plot(l[2:2*nside], SAMF_T[:2*nside-2], label='SAMF',linewidth=3,color='C0')
ax[0].plot(l[2:2*nside], SAFM_T[:2*nside-2], linestyle='dotted', label='SAFM', linewidth=3,color='C1')
ax[0].plot(l[2:2*nside], SNMF_T[:2*nside-2], label='SNMF',linewidth=3,color='C2')
ax[0].plot(l[2:2*nside], SNFM_T[:2*nside-2], linestyle='-.', label='SNFM',linewidth=3,color='C3')
ax[0].plot(l[2:2*nside], WT_th_noise[:2*nside-2], color='black', alpha=0.7, linestyle='dashed',label='THEO',linewidth=3)
# ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$w_T$')
ax[0].set_xlabel(r'$\ell$')
# ax[0].grid()
ax[0].legend()
ax[1].plot(l[2:2*nside], SAMF_E[:2*nside-2], label='SAMF',linewidth=3,color='C0')
ax[1].plot(l[2:2*nside], SAFM_E[:2*nside-2], linestyle='dotted', label='SAFM',linewidth=3,color='C1')
ax[1].plot(l[2:2*nside], SNMF_E[:2*nside-2], label='SNMF',linewidth=3,color='C2')
ax[1].plot(l[2:2*nside], SNFM_E[:2*nside-2], linestyle='-.', label='SNFM',linewidth=3,color='C3')
ax[1].plot(l[2:2*nside], WE_th_noise[:2*nside-2], color='black', alpha=0.7, linestyle='dashed',label='THEO',linewidth=3)
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$w_E$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
plt.show()
Once the filters have been calculated from masked maps, we apply them to the first set of simulations (also masked) to obtain the correlated maps. We compare them with the correlated maps obtained by applying an all-sky theoretical filter to all-sky maps (the ideal correlated maps).
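The filtering step in the cells below multiplies each $a_{\ell m}$ of the masked map by the filter value $w_{\ell}$; this is what $\verb+hp.sphtfunc.smoothalm+$ with the `beam_window` argument does ($\verb+hp.almxfl+$ is equivalent). A numpy-only sketch of the same operation, assuming healpy's alm ordering convention:

```python
import numpy as np

def almxfl_manual(alm, fl, lmax):
    """Multiply a healpy-ordered alm array by an ell-dependent window fl
    (the operation performed by hp.almxfl / smoothalm(beam_window=...)).
    healpy stores alm[idx] with idx = m*(2*lmax+1-m)//2 + l, for m <= l <= lmax."""
    out = np.array(alm, copy=True)
    for m in range(lmax + 1):
        for ell in range(m, lmax + 1):
            idx = m * (2 * lmax + 1 - m) // 2 + ell
            out[idx] = alm[idx] * fl[ell]
    return out

# Applying an all-ones window leaves the alm unchanged.
lmax = 8
n_alm = (lmax + 1) * (lmax + 2) // 2
rng = np.random.default_rng(1)
alm = rng.standard_normal(n_alm) + 1j * rng.standard_normal(n_alm)
assert np.allclose(almxfl_manual(alm, np.ones(lmax + 1), lmax), alm)
```

In the notebook the window is padded with `np.insert(..., [0, 0], 0)` because the filters start at $\ell = 2$.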
dev_SAMF = np.zeros((nbmc,4),float)
# for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_total_ET_mask60_SAMF = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(SAMF_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_SAMF = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(SAMF_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_SAMF = hp.sphtfunc.alm2map(alm_total_ET_mask60_SAMF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm already in the alm##
mask60_map_total_TcE_SAMF = hp.sphtfunc.alm2map(alm_total_TE_mask60_SAMF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm already in the alm##
mask60_map_total_EncT_SAMF = mask60_total_map_E - mask60_map_total_EcT_SAMF
mask60_map_total_TncE_SAMF = mask60_total_maps[0] - mask60_map_total_TcE_SAMF
resid_EcT_map_ideal_SAMF = ideal_EcT_map_masked - mask60_map_total_EcT_SAMF
resid_TcE_map_ideal_SAMF = ideal_TcE_map_masked - mask60_map_total_TcE_SAMF
resid_EncT_map_ideal_SAMF = ideal_EncT_map_masked - mask60_map_total_EncT_SAMF
resid_TncE_map_ideal_SAMF = ideal_TncE_map_masked - mask60_map_total_TncE_SAMF
resid_EcT_map_ideal_SAMF_masked = hp.ma(resid_EcT_map_ideal_SAMF)
resid_TcE_map_ideal_SAMF_masked = hp.ma(resid_TcE_map_ideal_SAMF)
resid_EncT_map_ideal_SAMF_masked = hp.ma(resid_EncT_map_ideal_SAMF)
resid_TncE_map_ideal_SAMF_masked = hp.ma(resid_TncE_map_ideal_SAMF)
resid_EcT_map_ideal_SAMF_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_SAMF_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_SAMF_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_SAMF_masked.mask = np.logical_not(mask)
dev_SAMF[i,0] = np.std(resid_EcT_map_ideal_SAMF_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_SAMF[i,1] = np.std(resid_TcE_map_ideal_SAMF_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_SAMF[i,2] = np.std(resid_EncT_map_ideal_SAMF_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_SAMF[i,3] = np.std(resid_TncE_map_ideal_SAMF_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
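The figure of merit stored in `dev_SAMF` (and its SAFM/SNMF analogues) is the standard deviation of the residual between the ideal and recovered maps over the unmasked pixels, normalized by the standard deviation of the ideal map over the same pixels. A minimal numpy version of that metric, on toy maps rather than the notebook's simulations:

```python
import numpy as np

def masked_std_ratio(ideal, recovered, mask):
    """std of the residual (ideal - recovered) over the unmasked pixels,
    divided by the std of the ideal map over the same pixels."""
    good = mask.astype(bool)  # True = observed pixel
    resid = ideal[good] - recovered[good]
    return np.std(resid) / np.std(ideal[good])

rng = np.random.default_rng(2)
ideal = rng.standard_normal(10000)
mask = rng.random(10000) > 0.4  # keep ~60% of the sky, like mask60
print(masked_std_ratio(ideal, ideal, mask))  # exact recovery -> 0.0
```

A value near 0 means the filtered map reproduces the ideal correlated map almost pixel by pixel; a value near 1 means the residual is as large as the signal itself.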
dev_SAFM = np.zeros((nbmc,4),float) #dev_anafast
cls_SAFM = np.zeros((nbmc,4,lmax+1),float)
cls_ideal = np.zeros((nbmc,4,lmax+1),float)
# for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
cls_ideal[i,0] = hp.anafast(ideal_EcT_map_masked)/np.mean(mask60)
cls_ideal[i,1] = hp.anafast(ideal_TcE_map_masked)/np.mean(mask60)
cls_ideal[i,2] = hp.anafast(ideal_EncT_map_masked)/np.mean(mask60)
cls_ideal[i,3] = hp.anafast(ideal_TncE_map_masked)/np.mean(mask60)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_total_ET_mask60_SAFM = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(SAFM_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_SAFM = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(SAFM_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_SAFM = hp.sphtfunc.alm2map(alm_total_ET_mask60_SAFM, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm already in the alm##
mask60_map_total_TcE_SAFM = hp.sphtfunc.alm2map(alm_total_TE_mask60_SAFM, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm already in the alm##
mask60_map_total_EncT_SAFM = mask60_total_map_E - mask60_map_total_EcT_SAFM
mask60_map_total_TncE_SAFM = mask60_total_maps[0] - mask60_map_total_TcE_SAFM
resid_EcT_map_ideal_SAFM = ideal_EcT_map_masked - mask60_map_total_EcT_SAFM
resid_TcE_map_ideal_SAFM = ideal_TcE_map_masked - mask60_map_total_TcE_SAFM
resid_EncT_map_ideal_SAFM = ideal_EncT_map_masked - mask60_map_total_EncT_SAFM
resid_TncE_map_ideal_SAFM = ideal_TncE_map_masked - mask60_map_total_TncE_SAFM
resid_EcT_map_ideal_SAFM_masked = hp.ma(resid_EcT_map_ideal_SAFM)
resid_TcE_map_ideal_SAFM_masked = hp.ma(resid_TcE_map_ideal_SAFM)
resid_EncT_map_ideal_SAFM_masked = hp.ma(resid_EncT_map_ideal_SAFM)
resid_TncE_map_ideal_SAFM_masked = hp.ma(resid_TncE_map_ideal_SAFM)
resid_EcT_map_ideal_SAFM_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_SAFM_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_SAFM_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_SAFM_masked.mask = np.logical_not(mask)
dev_SAFM[i,0] = np.std(resid_EcT_map_ideal_SAFM_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_SAFM[i,1] = np.std(resid_TcE_map_ideal_SAFM_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_SAFM[i,2] = np.std(resid_EncT_map_ideal_SAFM_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_SAFM[i,3] = np.std(resid_TncE_map_ideal_SAFM_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
cls_SAFM[i,0] = hp.anafast(resid_EcT_map_ideal_SAFM_masked)/np.mean(mask60)
cls_SAFM[i,1] = hp.anafast(resid_TcE_map_ideal_SAFM_masked)/np.mean(mask60)
cls_SAFM[i,2] = hp.anafast(resid_EncT_map_ideal_SAFM_masked)/np.mean(mask60)
cls_SAFM[i,3] = hp.anafast(resid_TncE_map_ideal_SAFM_masked)/np.mean(mask60)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Deviations/dev_SAMF.npy"), dev_SAMF)
# np.save(os.path.join(path,"Deviations/dev_SAFM.npy"), dev_SAFM)
# np.save(os.path.join(path,"Deviations/cls_SAFM.npy"), cls_SAFM)
# np.save(os.path.join(path,"Deviations/cls_ideal.npy"), cls_ideal)
cls_SAFM = np.load(os.path.join(path,"Deviations/cls_SAFM.npy"))
cls_ideal = np.load(os.path.join(path,"Deviations/cls_ideal.npy"))
# plt.suptitle('SAFM (best) filter')
%matplotlib inline
fig, ax = plt.subplots(2,2, figsize = (20,15))
ax[0,0].plot(l[2:2*nside], np.mean(cls_ideal,axis=0)[0,2:2*nside], label=r'$\langle C_{\ell}^{ideal} \rangle_{100}$',linestyle='dashed',linewidth=3)
ax[0,0].plot(l[2:2*nside], np.mean(cls_SAFM, axis=0)[0,2:2*nside], color='orange', label=r'$\langle C_{\ell}^{resid} \rangle_{100}$',linewidth=3)
ax[0,0].set_yscale('log')
ax[0,0].set_xscale('log')
ax[0,0].set_ylabel(r'$D_\ell^{EcT} \quad [\mu K^2]$')
ax[0,0].set_xlabel(r'$\ell$')
ax[0,0].legend()
ax[0,1].plot(l[2:2*nside], np.mean(cls_ideal,axis=0)[1,2:2*nside], label=r'$\langle C_{\ell}^{ideal} \rangle_{100}$',linestyle='dashed',linewidth=3)
ax[0,1].plot(l[2:2*nside], np.mean(cls_SAFM, axis=0)[1,2:2*nside], color='orange', label=r'$\langle C_{\ell}^{resid} \rangle_{100}$',linewidth=3)
ax[0,1].set_yscale('log')
ax[0,1].set_xscale('log')
ax[0,1].set_ylabel(r'$D_\ell^{TcE} \quad [\mu K^2]$')
ax[0,1].set_xlabel(r'$\ell$')
ax[0,1].legend()
ax[1,0].plot(l[2:2*nside], np.mean(cls_ideal,axis=0)[2,2:2*nside], label=r'$\langle C_{\ell}^{ideal} \rangle_{100}$',linestyle='dashed',linewidth=3)
ax[1,0].plot(l[2:2*nside], np.mean(cls_SAFM, axis=0)[2,2:2*nside], color='orange', label=r'$\langle C_{\ell}^{resid} \rangle_{100}$',linewidth=3)
ax[1,0].set_yscale('log')
ax[1,0].set_xscale('log')
ax[1,0].set_ylabel(r'$D_\ell^{EncT} \quad [\mu K^2]$')
ax[1,0].set_xlabel(r'$\ell$')
ax[1,0].legend()
ax[1,1].plot(l[2:2*nside], np.mean(cls_ideal,axis=0)[3,2:2*nside], label=r'$\langle C_{\ell}^{ideal} \rangle_{100}$',linestyle='dashed',linewidth=3)
ax[1,1].plot(l[2:2*nside], np.mean(cls_SAFM, axis=0)[3,2:2*nside], color='orange', label=r'$\langle C_{\ell}^{resid} \rangle_{100}$',linewidth=3)
ax[1,1].set_yscale('log')
ax[1,1].set_xscale('log')
ax[1,1].set_ylabel(r'$D_\ell^{TncE} \quad [\mu K^2]$')
ax[1,1].set_xlabel(r'$\ell$')
ax[1,1].legend()
plt.show()
dev_SNMF = np.zeros((nbmc,4),float)
# for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
alm_total_ET_mask60_SNMF = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(SNMF_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_SNMF = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(SNMF_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_SNMF = hp.sphtfunc.alm2map(alm_total_ET_mask60_SNMF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm in the alm##
mask60_map_total_TcE_SNMF = hp.sphtfunc.alm2map(alm_total_TE_mask60_SNMF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm in the alm##
mask60_map_total_EncT_SNMF = mask60_total_map_E - mask60_map_total_EcT_SNMF
mask60_map_total_TncE_SNMF = mask60_total_maps[0] - mask60_map_total_TcE_SNMF
resid_EcT_map_ideal_SNMF = ideal_EcT_map_masked - mask60_map_total_EcT_SNMF
resid_TcE_map_ideal_SNMF = ideal_TcE_map_masked - mask60_map_total_TcE_SNMF
resid_EncT_map_ideal_SNMF = ideal_EncT_map_masked - mask60_map_total_EncT_SNMF
resid_TncE_map_ideal_SNMF = ideal_TncE_map_masked - mask60_map_total_TncE_SNMF
resid_EcT_map_ideal_SNMF_masked = hp.ma(resid_EcT_map_ideal_SNMF)
resid_TcE_map_ideal_SNMF_masked = hp.ma(resid_TcE_map_ideal_SNMF)
resid_EncT_map_ideal_SNMF_masked = hp.ma(resid_EncT_map_ideal_SNMF)
resid_TncE_map_ideal_SNMF_masked = hp.ma(resid_TncE_map_ideal_SNMF)
resid_EcT_map_ideal_SNMF_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_SNMF_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_SNMF_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_SNMF_masked.mask = np.logical_not(mask)
dev_SNMF[i,0] = np.std(resid_EcT_map_ideal_SNMF_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_SNMF[i,1] = np.std(resid_TcE_map_ideal_SNMF_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_SNMF[i,2] = np.std(resid_EncT_map_ideal_SNMF_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_SNMF[i,3] = np.std(resid_TncE_map_ideal_SNMF_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
dev_SNFM = np.zeros((nbmc,4),float) #dev_namaster_binned
# for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (without apodizing)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Simulations:
alm_total_ET_mask60_SNFM = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(SNFM_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_SNFM = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(SNFM_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_SNFM = hp.sphtfunc.alm2map(alm_total_ET_mask60_SNFM, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm in the alm##
mask60_map_total_TcE_SNFM = hp.sphtfunc.alm2map(alm_total_TE_mask60_SNFM, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm in the alm##
mask60_map_total_EncT_SNFM = mask60_total_map_E - mask60_map_total_EcT_SNFM
mask60_map_total_TncE_SNFM = mask60_total_maps[0] - mask60_map_total_TcE_SNFM
resid_EcT_map_ideal_SNFM = ideal_EcT_map_masked - mask60_map_total_EcT_SNFM
resid_TcE_map_ideal_SNFM = ideal_TcE_map_masked - mask60_map_total_TcE_SNFM
resid_EncT_map_ideal_SNFM = ideal_EncT_map_masked - mask60_map_total_EncT_SNFM
resid_TncE_map_ideal_SNFM = ideal_TncE_map_masked - mask60_map_total_TncE_SNFM
resid_EcT_map_ideal_SNFM_masked = hp.ma(resid_EcT_map_ideal_SNFM)
resid_TcE_map_ideal_SNFM_masked = hp.ma(resid_TcE_map_ideal_SNFM)
resid_EncT_map_ideal_SNFM_masked = hp.ma(resid_EncT_map_ideal_SNFM)
resid_TncE_map_ideal_SNFM_masked = hp.ma(resid_TncE_map_ideal_SNFM)
resid_EcT_map_ideal_SNFM_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_SNFM_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_SNFM_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_SNFM_masked.mask = np.logical_not(mask)
dev_SNFM[i,0] = np.std(resid_EcT_map_ideal_SNFM_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_SNFM[i,1] = np.std(resid_TcE_map_ideal_SNFM_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_SNFM[i,2] = np.std(resid_EncT_map_ideal_SNFM_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_SNFM[i,3] = np.std(resid_TncE_map_ideal_SNFM_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Deviations/dev_SNMF.npy"), dev_SNMF)
# np.save(os.path.join(path,"Deviations/dev_SNFM.npy"), dev_SNFM)
In order to quantify the quality of the results, we obtain the residual maps for each simulation and, from them, their dispersion. This yields a distribution from which we can extract the most probable value and the errors for a given confidence interval.
def find_nearest(array, value):
array = np.asarray(array)
idx = (np.abs(array - value)).argmin()
return array[idx]
def errors_bars(array, ci, most_probable):
'''
Function which determines the errors given an array and the confidence interval in %
'''
area_inf = np.empty(len(array))
area_sup = np.empty(len(array))
sorted_array = np.sort(array,axis=0)
for i in np.arange(len(array)):
area_inf[i] = np.count_nonzero(sorted_array[i+1:] < most_probable, axis=0)
area_sup[i] = np.count_nonzero(sorted_array[:i+1] > most_probable, axis=0)
err_inf = (find_nearest(area_inf,ci/2),int(np.mean(np.where(area_inf==find_nearest(area_inf,ci/2)))))
inc_inf = (np.mean(array)-sorted_array[err_inf[1]])
err_sup = (find_nearest(area_sup,ci/2),int(np.mean(np.where(area_sup==find_nearest(area_sup,ci/2)))))
inc_sup = (sorted_array[err_sup[1]]-np.mean(array))
return sorted_array, err_inf, err_sup, inc_inf, inc_sup
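The counting scheme in `errors_bars` can be cross-checked against a plain percentile estimate. This is a self-contained sketch on synthetic data (the Gaussian sample and the median-as-peak choice are illustrative stand-ins, not the notebook's KDE machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=0.1, size=10000)  # stand-in for one dev_* column

most_probable = np.median(sample)           # crude stand-in for the KDE peak
lo, hi = np.percentile(sample, [16, 84])    # central 68% interval
inc_inf = most_probable - lo                # lower error bar
inc_sup = hi - most_probable                # upper error bar
```

For a symmetric distribution both error bars come out close to one standard deviation, which is the sanity check one expects from the 68% interval.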
def highlight_max(s):
'''
Highlight the maximum in a Series orange (pandas style).
'''
is_max = s == s.max()
return ['background-color: orange' if v else '' for v in is_max]
def highlight_min(s):
'''
Highlight the minimum in a Series blue (pandas style).
'''
is_min = s == s.min()
return ['background-color: cyan' if v else '' for v in is_min]
dev_th = np.load(os.path.join(path,"Deviations/dev_th.npy"))
dev_SAFM = np.load(os.path.join(path,"Deviations/dev_SAFM.npy"))
dev_SAMF = np.load(os.path.join(path,"Deviations/dev_SAMF.npy"))
dev_SNFM = np.load(os.path.join(path,"Deviations/dev_SNFM.npy"))
dev_SNMF = np.load(os.path.join(path,"Deviations/dev_SNMF.npy"))
col_names = ['EcT','TcE','EncT','TncE']
intervals_th = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution th filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_th[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_th[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_th[:,i],68,dev_max)
intervals_th[0,i,0] = dev_max
intervals_th[0,i,1] = inc_inf
intervals_th[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_th[:,i],95,dev_max)
intervals_th[1,i,0] = dev_max
intervals_th[1,i,1] = inc_inf
intervals_th[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_SAFM = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution SAFM filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SAFM[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SAFM[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM[:,i],68,dev_max)
intervals_SAFM[0,i,0] = dev_max
intervals_SAFM[0,i,1] = inc_inf
intervals_SAFM[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM[:,i],95,dev_max)
intervals_SAFM[1,i,0] = dev_max
intervals_SAFM[1,i,1] = inc_inf
intervals_SAFM[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_SAMF = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution SAMF filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SAMF[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SAMF[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAMF[:,i],68,dev_max)
intervals_SAMF[0,i,0] = dev_max
intervals_SAMF[0,i,1] = inc_inf
intervals_SAMF[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAMF[:,i],95,dev_max)
intervals_SAMF[1,i,0] = dev_max
intervals_SAMF[1,i,1] = inc_inf
intervals_SAMF[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_SNFM = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution SNFM filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SNFM[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SNFM[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SNFM[:,i],68,dev_max)
intervals_SNFM[0,i,0] = dev_max
intervals_SNFM[0,i,1] = inc_inf
intervals_SNFM[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SNFM[:,i],95,dev_max)
intervals_SNFM[1,i,0] = dev_max
intervals_SNFM[1,i,1] = inc_inf
intervals_SNFM[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_SNMF = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution SNMF filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SNMF[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SNMF[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SNMF[:,i],68,dev_max)
intervals_SNMF[0,i,0] = dev_max
intervals_SNMF[0,i,1] = inc_inf
intervals_SNMF[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SNMF[:,i],95,dev_max)
intervals_SNMF[1,i,0] = dev_max
intervals_SNMF[1,i,1] = inc_inf
intervals_SNMF[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
idx_names = ['SAFM','SAMF','SNFM','SNMF']
col_names = ['EcT','TcE','EncT','TncE']
int68_inf_sims = pd.DataFrame([intervals_SAFM[0,:,1],intervals_SAMF[0,:,1],intervals_SNFM[0,:,1],intervals_SNMF[0,:,1]], index=idx_names, columns=col_names)
int68_sup_sims = pd.DataFrame([intervals_SAFM[0,:,2],intervals_SAMF[0,:,2],intervals_SNFM[0,:,2],intervals_SNMF[0,:,2]], index=idx_names, columns=col_names)
int95_inf_sims = pd.DataFrame([intervals_SAFM[1,:,1],intervals_SAMF[1,:,1],intervals_SNFM[1,:,1],intervals_SNMF[1,:,1]], index=idx_names, columns=col_names)
int95_sup_sims = pd.DataFrame([intervals_SAFM[1,:,2],intervals_SAMF[1,:,2],intervals_SNFM[1,:,2],intervals_SNMF[1,:,2]], index=idx_names, columns=col_names)
maxs_sims = pd.DataFrame([intervals_SAFM[0,:,0],intervals_SAMF[0,:,0],intervals_SNFM[0,:,0],intervals_SNMF[0,:,0]], index=idx_names, columns=col_names).style.\
apply(highlight_max).\
apply(highlight_min)
maxs_sims
|  | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| SAFM | 0.137713 | 0.240412 | 0.073292 | 0.133827 |
| SAMF | 0.138028 | 0.245359 | 0.073623 | 0.137215 |
| SNFM | 0.145829 | 0.249919 | 0.077381 | 0.139525 |
| SNMF | 0.145258 | 0.895857 | 0.077086 | 0.500363 |
maxs_sims = pd.DataFrame([intervals_SAFM[0,:,0],intervals_SAMF[0,:,0],intervals_SNFM[0,:,0],intervals_SNMF[0,:,0]], index=['SAFM','SAMF','SNFM','SNMF'], columns=['EcT','TcE','EncT','TncE']).round(5)
int68_sims = maxs_sims.astype(str) + "-" + int68_inf_sims.round(5).astype(str) + "+" + int68_sup_sims.round(5).astype(str)
int68_sims.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 68% confidence intervals')
|  | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| SAFM | 0.13771-0.00608+0.00106 | 0.24041-0.02345+0.01524 | 0.07329-0.00345+0.00056 | 0.13383-0.02013+0.00682 |
| SAMF | 0.13803-0.00606+0.00107 | 0.24536-0.02423+0.01784 | 0.07362-0.00203+0.00078 | 0.13721-0.0169+0.00918 |
| SNFM | 0.14583-0.00288+0.00298 | 0.24992-0.03008+0.02471 | 0.07738-0.00165+0.00134 | 0.13952-0.01741+0.01529 |
| SNMF | 0.14526-0.00308+0.00274 | 0.89586-0.17141+0.15037 | 0.07709-0.00153+0.00166 | 0.50036-0.08433+0.0867 |
maxs_sims = pd.DataFrame([intervals_SAFM[0,:,0],intervals_SAMF[0,:,0],intervals_SNFM[0,:,0],intervals_SNMF[0,:,0]], index=['SAFM','SAMF','SNFM','SNMF'], columns=['EcT','TcE','EncT','TncE']).round(2)
int95_sims = maxs_sims.astype(str) + "-" + int95_inf_sims.round(5).astype(str) + "+" + int95_sup_sims.round(5).astype(str)
int95_sims.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 95% confidence intervals')
|  | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| SAFM | 0.14-0.00608+0.00187 | 0.24-0.0465+0.02468 | 0.07-0.00345+0.00117 | 0.13-0.02777+0.0133 |
| SAMF | 0.14-0.00606+0.00182 | 0.25-0.04974+0.02686 | 0.07-0.00344+0.00141 | 0.14-0.0295+0.01679 |
| SNFM | 0.15-0.00461+0.00857 | 0.25-0.06655+0.04428 | 0.08-0.0034+0.00284 | 0.14-0.03888+0.02661 |
| SNMF | 0.15-0.00629+0.00501 | 0.9-0.31395+0.32462 | 0.08-0.00296+0.00318 | 0.5-0.18195+0.17587 |
Until this point we have studied a set of 100 simulations, from which we have obtained the Wiener filter and the corresponding correlated maps. As our purpose is to analyse the forthcoming observations from LiteBIRD, we need to extract conclusions from a single realization, since we can only take measurements of one universe (our Universe). In most cases the observations will be noisy, so we will need to smooth them. To this end, we bin the multipoles and interpolate to reduce the noise.
Firstly, we can study full-sky CMB and noise simulations trying to recover the theoretical prediction.
def smooth(y, box_pts):
box = np.ones(box_pts)/box_pts
y_smooth = np.convolve(y ,box, mode='same')
return y_smooth
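A minimal check of what this boxcar does (a synthetic flat "spectrum" plus white noise; the numbers are illustrative):

```python
import numpy as np

def smooth(y, box_pts):
    box = np.ones(box_pts) / box_pts
    return np.convolve(y, box, mode='same')

rng = np.random.default_rng(1)
truth = np.ones(300)                                # flat toy "spectrum"
noisy = truth + rng.normal(0.0, 0.05, truth.size)   # add white noise
smoothed = smooth(noisy, 10)

# away from the edges, the smoothed curve scatters less around the truth
resid_raw = np.std((noisy - truth)[10:-10])
resid_smooth = np.std((smoothed - truth)[10:-10])
```

Note that `mode='same'` distorts the first and last `box_pts//2` points, which is why the different starting offsets are averaged over in `smooth_cls` below.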
def smooth_cls(cls,index,cls_first):
ells = np.arange(3*nside, dtype='int32') # Array of multipoles
if index == 0:
ini = int(cls_first); a = 8;
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 5), smooth(cls[a:], 10)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 7), smooth(cls[a+3:], 12)))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-3], 3), smooth(cls[a-3:], 12)))
try_smooth_4 = np.concatenate((smooth(cls[ini:a-2], 3), smooth(cls[a-2:], 12)))
try_smooth_5 = np.concatenate((smooth(cls[ini:a+2], 5), smooth(cls[a+2:], 12)))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
if index == 1:
ini = int(cls_first);a = 15;
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 7), smooth(cls[a:], 15)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 5), smooth(cls[a+3:], 15)))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-3], 3), smooth(cls[a-3:], 15)))
try_smooth_4 = np.concatenate((smooth(cls[ini:a+5], 4), smooth(cls[a+5:], 15)))
try_smooth_5 = np.concatenate((smooth(cls[ini:a-5], 3), smooth(cls[a-5:], 15)))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
if index == 3:
ini = int(cls_first); a = 10; b = 25; c = 50; d = 830; e = 945; f=ini+1534
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 3), smooth(cls[a:b], 5), smooth(cls[b:c], 8), smooth(cls[c:d], 15), smooth(cls[d:e],90), smooth(cls[e:], f-e)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 3), smooth(cls[a+3:b-3], 5), smooth(cls[b-3:c+10], 8), smooth(cls[c+10:d-3], 15), smooth(cls[d-3:e+3],90), smooth(cls[e+3:], f-(e+3))))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-3], 3), smooth(cls[a-3:b+3], 5), smooth(cls[b+3:c-3], 8), smooth(cls[c-3:d+3], 15), smooth(cls[d+3:e-3],90), smooth(cls[e-3:], f-(e-3))))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3)),axis=0)
return try_mean
cls = np.array([0,1,3])
cls_total_smooth= np.zeros((100,4,1536),float)
for i in np.arange(100):
for c in np.arange(3):
cls_total_smooth[i,cls[c],2:] = smooth_cls(Cls_total[i,cls[c]],cls[c],2)
dls_total_smooth = (l*(l+1))* cls_total_smooth/ (2 * np.pi)
wT_smooth = np.empty((100,1534),float)
wE_smooth = np.empty((100,1534),float)
for i in np.arange(100): # maps updated with these, which are the best (05/06)
wT_smooth[i] = smooth(cls_total_smooth[i,3,2:]/cls_total_smooth[i,0,2:],3)
wE_smooth[i] = smooth(cls_total_smooth[i,3,2:]/cls_total_smooth[i,1,2:],9)
# np.save(os.path.join(path,"Data/Smooth_filter/cls_total_smooth.npy"), cls_total_smooth)
# np.save(os.path.join(path,"Data/Smooth_filter/dls_total_smooth.npy"), dls_total_smooth)
# np.save(os.path.join(path,"Data/Smooth_filter/wT_smooth.npy"), wT_smooth)
# np.save(os.path.join(path,"Data/Smooth_filter/wE_smooth.npy"), wE_smooth)
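The filters built above are simply ratios of the smoothed spectra, $w_\ell^T = C_\ell^{TE}/C_\ell^{TT}$ and $w_\ell^E = C_\ell^{TE}/C_\ell^{EE}$. A minimal sketch with made-up power laws (not the notebook's smoothed spectra):

```python
import numpy as np

ell = np.arange(2, 100, dtype=float)
cl_TT = 1000.0 / ell**2   # toy spectra, illustrative only
cl_EE = 10.0 / ell**2
cl_TE = 50.0 / ell**2

wT = cl_TE / cl_TT        # applied to the T alm's to build EcT
wE = cl_TE / cl_EE        # applied to the E alm's to build TcE
```

With these toy power laws the ratios are constant in $\ell$; with real (noisy) spectra they are not, which is why an extra pass of `smooth` is applied to `wT_smooth` and `wE_smooth` above.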
Cls_total_smooth = np.load(os.path.join(path,"Data/Smooth_filter/cls_total_smooth.npy"))
Dls_total_smooth = np.load(os.path.join(path,"Data/Smooth_filter/dls_total_smooth.npy"))
WT_smooth = np.load(os.path.join(path,"Data/Smooth_filter/wT_smooth.npy"))
WE_smooth = np.load(os.path.join(path,"Data/Smooth_filter/wE_smooth.npy"))
%matplotlib inline
fig, ax = plt.subplots(1,3, figsize = (30,8))
# ax[0].set_title('Temperature map')
ax[0].scatter(l[2:2*nside], Dls_total[0,0,2:2*nside],color='lightgreen',marker='*', label=r'One $C_{\ell}^{Anaf}$',linewidth=3)
ax[0].plot(l[2:2*nside], Dls_total_smooth[0,0,2:2*nside],color='orange', label=r'$C_{\ell}^{Smooth}$',linewidth=3)
ax[0].plot(ells_CAMB[2:2*nside], Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside], color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[0].fill_between(l[2:2*nside], (Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside]) - np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],(Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside]),fsky=1)),(Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside]) + np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],(Dls_CAMB[0,2:2*nside] * (fw_TEB[2:2*nside,0]**2*px[0][2:2*nside]**2) + Dls_noise_T[2:2*nside]),fsky=1)), color='gray', alpha=0.4)
ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].set_ylabel(r'$D_{\ell}^{TT} \quad [\mu K^2]$')
ax[0].set_xlabel(r'$\ell$')
ax[0].legend()
# ax[1].set_title('Temperature map')
ax[1].scatter(l[2:2*nside], Dls_total[0,1,2:2*nside],color='lightgreen',marker='*', label=r'One $C_{\ell}^{Anaf}$',linewidth=3)
ax[1].plot(l[2:2*nside], Dls_total_smooth[0,1,2:2*nside],color='orange', label=r'$C_{\ell}^{Smooth}$',linewidth=3)
ax[1].plot(ells_CAMB[2:2*nside], Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside], color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[1].fill_between(l[2:2*nside], (Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside]) - np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],(Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside]),fsky=1)),(Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside]) + np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],(Dls_CAMB[1,2:2*nside] * (fw_TEB[2:2*nside,1]**2*px[1][2:2*nside]**2) + Dls_noise_P[2:2*nside]),fsky=1)), color='gray', alpha=0.4)
ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].set_ylabel(r'$D_{\ell}^{EE} \quad [\mu K^2]$')
ax[1].set_xlabel(r'$\ell$')
ax[1].legend()
# ax[2].set_title('Temperature map')
ax[2].scatter(l[2:2*nside], Dls_total[0,3,2:2*nside],color='lightgreen',marker='*', label=r'One $C_{\ell}^{Anaf}$',linewidth=3)
ax[2].plot(l[2:2*nside], Dls_total_smooth[0,3,2:2*nside],color='orange', label=r'$C_{\ell}^{Smooth}$',linewidth=3)
ax[2].plot(ells_CAMB[2:2*nside], Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1] * px[0][2:2*nside]*px[1][2:2*nside]), color='k',linestyle='dashed',alpha=0.7,label=r'$C_{\ell}^{TH}$',linewidth=3)
ax[2].fill_between(l[2:2*nside], Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1] * px[0][2:2*nside]*px[1][2:2*nside]) - np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1] * px[0][2:2*nside]*px[1][2:2*nside]),fsky=1)), Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1] * px[0][2:2*nside]*px[1][2:2*nside]) + np.sqrt(cov_mat_cosmic_variance(l[2:2*nside],Dls_CAMB[3,2:2*nside] * (fw_TEB[2:2*nside,0]*fw_TEB[2:2*nside,1] * px[0][2:2*nside]*px[1][2:2*nside]),fsky=1)), color='gray', alpha=0.4)
# ax[2].set_yscale('log')
ax[2].set_xscale('log')
ax[2].set_ylabel(r'$D_{\ell}^{TE} \quad [\mu K^2]$')
ax[2].set_xlabel(r'$\ell$')
ax[2].legend()
plt.show()
We show these smoothed spectra, $C_{\ell}^{Smooth}$, together with the theoretical prediction, $C_{\ell}^{TH}$, and a single realization obtained with $\verb+anafast+$, One $C_{\ell}^{Anaf}$. We are able to obtain a smooth curve from a single CMB-plus-noise realization over the whole multipole range, although at low multipoles, where the cosmic variance is larger, it is harder to recover the theoretical prediction. With these smoothed spectra we can compute the Wiener filter and compare it with the theoretical one.
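The gray bands in the previous figure come from `cov_mat_cosmic_variance`, defined earlier in the notebook. Assuming it implements the standard auto-spectrum cosmic variance (its exact signature here is a guess), a minimal version would be:

```python
import numpy as np

def cov_mat_cosmic_variance(ell, cl, fsky=1.0):
    # Var(C_ell) = 2 C_ell^2 / ((2 ell + 1) fsky); assumed to match the
    # notebook's function of the same name (signature is an assumption)
    return 2.0 * cl**2 / ((2.0 * ell + 1.0) * fsky)

ell = np.arange(2, 10, dtype=float)
var = cov_mat_cosmic_variance(ell, np.ones_like(ell))
```

The $1/(2\ell+1)$ factor is why the scatter of a single realization around $C_{\ell}^{TH}$ is largest at low multipoles.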
%matplotlib inline
fig, ax = plt.subplots(1,2, figsize = (15,5))
ax[0].plot(l[2:], WT_th_noise, color='navy', label=r'THEO',linewidth=3)
ax[0].plot(np.arange(2,lmax+1),wT_smooth[0],linewidth=3,linestyle='-.',label='Smooth',color='C2')
# ax[0].set_yscale('log')
ax[0].set_xscale('log')
ax[0].grid()
ax[0].legend()
ax[1].plot(l[2:], WE_th_noise, color='navy', label=r'THEO',linewidth=3)
ax[1].plot(np.arange(2,lmax+1),wE_smooth[0],linewidth=3,linestyle='-.',label='Smooth',color='C2')
# ax[1].set_yscale('log')
ax[1].set_xscale('log')
ax[1].grid()
ax[1].legend()
plt.suptitle('All sky maps')
plt.show()
Secondly, we need to analyse masked CMB and noise simulations and check whether we can recover the theoretical estimates with the smoothing method. Analogously to what we have done for the simulations case, we need to find the best filter definition when a mask is included. This leads to four possible filters, summarized in the following figure:

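The masking pattern used throughout (the binary `mask` equals 1 on observed pixels, so the masked-array mask is its logical NOT) can be mimicked with plain numpy; `hp.ma` behaves analogously on HEALPix maps. The toy map below is illustrative:

```python
import numpy as np

nside = 4
npix = 12 * nside**2                     # HEALPix pixel count for this nside
rng = np.random.default_rng(2)
map_vals = rng.normal(size=npix)         # toy map
mask = np.ones(npix, dtype=bool)
mask[: npix // 3] = False                # hide a third of the sky

masked_map = np.ma.MaskedArray(map_vals, mask=np.logical_not(mask))
sigma = np.std(masked_map.compressed())  # dispersion over observed pixels only
```

`compressed()` drops the hidden pixels, which is exactly how the `dev_*` statistics above restrict the dispersion to the observed sky.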
def smooth(y, box_pts):
box = np.ones(box_pts)/box_pts
y_smooth = np.convolve(y ,box, mode='same')
return y_smooth
def smooth_filter(raw_filter,index,ell_first):
ells = np.arange(3*nside, dtype='int32') # Array of multipoles
if int(index) == 0:
ini = int(ell_first); a = 830; b=945; c=ini+1534
try_smooth_1 = np.concatenate((smooth(raw_filter[ini:a], 30), smooth(raw_filter[a:b], 90), smooth(raw_filter[b:], c-b)))
try_smooth_2 = np.concatenate((smooth(raw_filter[ini:a+7], 30), smooth(raw_filter[a+7:b-7], 90), smooth(raw_filter[b-7:], c-(b-7))))
try_smooth_3 = np.concatenate((smooth(raw_filter[ini:a-8], 30), smooth(raw_filter[a-8:b+8], 90), smooth(raw_filter[b+8:], c-(b+8))))
try_smooth_4 = np.concatenate((smooth(raw_filter[ini:a-6], 30), smooth(raw_filter[a-6:b+6], 90), smooth(raw_filter[b+6:], c-(b+6))))
try_smooth_5 = np.concatenate((smooth(raw_filter[ini:a+10], 30), smooth(raw_filter[a+10:b-10], 90), smooth(raw_filter[b-10:], c-(b-10))))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
if int(index) == 1:
ini = int(ell_first); a = 50; b=200; c=ini+1534
try_smooth_1 = np.concatenate((smooth(raw_filter[ini:a], 11), smooth(raw_filter[a:b], 50), smooth(raw_filter[b:], 70)))
try_smooth_2 = np.concatenate((smooth(raw_filter[ini:a+3], 15), smooth(raw_filter[a+3:b-3], 50), smooth(raw_filter[b-3:], 70)))
try_smooth_3 = np.concatenate((smooth(raw_filter[ini:a-4], 10), smooth(raw_filter[a-4:b+4], 50), smooth(raw_filter[b+4:], 70)))
try_smooth_4 = np.concatenate((smooth(raw_filter[ini:a-2], 12), smooth(raw_filter[a-2:b+2], 50), smooth(raw_filter[b+2:], 70)))
try_smooth_5 = np.concatenate((smooth(raw_filter[ini:a+5], 18), smooth(raw_filter[a+5:b-5], 50), smooth(raw_filter[b-5:], 70)))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
def smooth_cls_mask(cls,index,cls_first):
ells = np.arange(3*nside, dtype='int32') # Array of multipoles
if index == 0:
ini = int(cls_first); a = 8;
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 5), smooth(cls[a:], 10)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 7), smooth(cls[a+3:], 12)))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-3], 3), smooth(cls[a-3:], 12)))
try_smooth_4 = np.concatenate((smooth(cls[ini:a-2], 3), smooth(cls[a-2:], 12)))
try_smooth_5 = np.concatenate((smooth(cls[ini:a+2], 5), smooth(cls[a+2:], 12)))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
if index == 1:
ini = int(cls_first);a = 15;
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 8), smooth(cls[a:], 10)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 9), smooth(cls[a+3:], 15)))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-4], 7), smooth(cls[a-4:], 12)))
try_smooth_4 = np.concatenate((smooth(cls[ini:a-2], 7), smooth(cls[a-2:], 10)))
try_smooth_5 = np.concatenate((smooth(cls[ini:a+5], 11), smooth(cls[a+5:], 11)))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3,try_smooth_4,try_smooth_5)),axis=0)
return try_mean
if index == 3:
ini = int(cls_first); a = 10; b = 25; c = 50; d = 830; e = 945; f=ini+1534
try_smooth_1 = np.concatenate((smooth(cls[ini:a], 3), smooth(cls[a:b], 5), smooth(cls[b:c], 8), smooth(cls[c:d], 15), smooth(cls[d:e],90), smooth(cls[e:], f-e)))
try_smooth_2 = np.concatenate((smooth(cls[ini:a+3], 3), smooth(cls[a+3:b-3], 5), smooth(cls[b-3:c+10], 8), smooth(cls[c+10:d-3], 15), smooth(cls[d-3:e+3],90), smooth(cls[e+3:], f-(e+3))))
try_smooth_3 = np.concatenate((smooth(cls[ini:a-3], 3), smooth(cls[a-3:b+3], 5), smooth(cls[b+3:c-3], 8), smooth(cls[c-3:d+3], 15), smooth(cls[d+3:e-3],90), smooth(cls[e-3:], f-(e-3))))
try_mean = np.mean(np.vstack((try_smooth_1,try_smooth_2,try_smooth_3)),axis=0)
return try_mean
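The `smooth` helper called throughout these functions is defined in an earlier section of the notebook. One plausible, length-preserving running-mean implementation (a sketch, not necessarily the original) is:

```python
import numpy as np

def smooth(x, window):
    """Running mean that preserves the input length; near the edges
    the average is taken over the partial window only."""
    x = np.asarray(x, dtype=float)
    if window <= 1 or x.size == 0:
        return x.copy()
    kernel = np.ones(min(int(window), x.size))
    # mode='same' keeps len(output) == len(x); normalize by the
    # number of samples actually contributing at each position.
    num = np.convolve(x, kernel, mode='same')
    den = np.convolve(np.ones_like(x), kernel, mode='same')
    return num / den
```

Length preservation matters here because the smoothed segments are concatenated back into arrays of fixed multipole range.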
# Smoothing applied to the masked-map spectra (DATA)
cls = np.array([0,1,3])
cls_total_masked_anafast_smooth= np.zeros((100,4,1536),float)
cls_total_masked_namaster_smooth= np.zeros((100,4,1536),float)
for i in np.arange(100):
for c in np.arange(3):
cls_total_masked_anafast_smooth[i,cls[c],2:] = smooth_cls_mask(Cls_total_mask60_anafast[i,cls[c]],cls[c],2)
cls_total_masked_namaster_smooth[i,cls[c],2:] = smooth_cls_mask(Cls_total_mask60_namaster[i,cls[c]],cls[c],0)
dls_total_masked_anafast_smooth = (l*(l+1)) * cls_total_masked_anafast_smooth / (2 * np.pi)
dls_total_masked_namaster_smooth = (l*(l+1)) * cls_total_masked_namaster_smooth / (2 * np.pi)
# Plot the raw (Anafast/NaMaster) and smoothed spectra of the total masked maps
# for TT, EE and TE
%matplotlib inline
fig, ax = plt.subplots(2,3, figsize = (30,15))
ax[0,0].plot(l[2:], Dls_total_mask60_anafast[0,0,2:], label='Anafast',linewidth=3)
ax[0,0].plot(l[2:], dls_total_masked_anafast_smooth[0,0,2:], label='Smooth',linewidth=3)
ax[0,0].set_yscale('log')
ax[0,0].set_xscale('log')
ax[0,0].set_title('Anafast and smoothed TT')
ax[0,0].set_ylabel(r'$D_\ell^{TT}$')
ax[0,0].set_xlabel(r'$\ell$')
ax[0,0].legend()
ax[0,1].plot(l[2:], Dls_total_mask60_anafast[0,1,2:], label='Anafast',linewidth=3)
ax[0,1].plot(l[2:], dls_total_masked_anafast_smooth[0,1,2:], label='Smooth',linewidth=3)
ax[0,1].set_yscale('log')
ax[0,1].set_xscale('log')
ax[0,1].set_title('Anafast and smoothed EE')
ax[0,1].set_ylabel(r'$D_\ell^{EE}$')
ax[0,1].set_xlabel(r'$\ell$')
ax[0,1].legend()
ax[0,2].plot(l[2:], Dls_total_mask60_anafast[0,3,2:], label='Anafast',linewidth=3)
ax[0,2].plot(l[2:], dls_total_masked_anafast_smooth[0,3,2:], label='Smooth',linewidth=3)
# ax[0,2].set_yscale('log')
ax[0,2].set_xscale('log')
ax[0,2].set_title('Anafast and smoothed TE')
ax[0,2].set_ylabel(r'$D_\ell^{TE}$')
ax[0,2].set_xlabel(r'$\ell$')
ax[0,2].legend()
ax[1,0].plot(l[2:], Dls_total_mask60_namaster[0,0], label='Namaster',linewidth=3)
ax[1,0].plot(l[2:], dls_total_masked_namaster_smooth[0,0,2:], label='Smooth',linewidth=3)
ax[1,0].set_yscale('log')
ax[1,0].set_xscale('log')
ax[1,0].set_title('NaMaster and smoothed TT')
ax[1,0].set_ylabel(r'$D_\ell^{TT}$')
ax[1,0].set_xlabel(r'$\ell$')
ax[1,0].legend()
ax[1,1].plot(l[2:], Dls_total_mask60_namaster[0,1], label='Namaster',linewidth=3)
ax[1,1].plot(l[2:], dls_total_masked_namaster_smooth[0,1,2:], label='Smooth',linewidth=3)
ax[1,1].set_yscale('log')
ax[1,1].set_xscale('log')
ax[1,1].set_title('NaMaster and smoothed EE')
ax[1,1].set_ylabel(r'$D_\ell^{EE}$')
ax[1,1].set_xlabel(r'$\ell$')
ax[1,1].legend()
ax[1,2].plot(l[2:], Dls_total_mask60_namaster[0,3], label='Namaster',linewidth=3)
ax[1,2].plot(l[2:], dls_total_masked_namaster_smooth[0,3,2:], label='Smooth',linewidth=3)
# ax[1,2].set_yscale('log')
ax[1,2].set_xscale('log')
ax[1,2].set_title('NaMaster and smoothed TE')
ax[1,2].set_ylabel(r'$D_\ell^{TE}$')
ax[1,2].set_xlabel(r'$\ell$')
ax[1,2].legend()
plt.suptitle('Masked maps')
plt.show()
wT_anafast_raw = Cls_total_mask60_anafast[:,3,2:]/Cls_total_mask60_anafast[:,0,2:]
wE_anafast_raw = Cls_total_mask60_anafast[:,3,2:]/Cls_total_mask60_anafast[:,1,2:]
wT_namaster_raw = Cls_total_mask60_namaster[:,3]/Cls_total_mask60_namaster[:,0]
wE_namaster_raw = Cls_total_mask60_namaster[:,3]/Cls_total_mask60_namaster[:,1]
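The raw filters above are built multipole by multipole as the ratio of the TE cross-spectrum to the TT (or EE) auto-spectrum. A minimal numerically safe version of that operation (the helper name `ratio_filter` is hypothetical) could be:

```python
import numpy as np

def ratio_filter(cl_cross, cl_auto):
    """Per-multipole ratio C_l^TE / C_l^auto, returning 0 where the
    denominator vanishes instead of propagating inf/nan."""
    cl_cross = np.asarray(cl_cross, dtype=float)
    cl_auto = np.asarray(cl_auto, dtype=float)
    out = np.zeros_like(cl_cross)
    np.divide(cl_cross, cl_auto, out=out, where=(cl_auto != 0))
    return out
```

Guarding the division avoids NaNs at the lowest multipoles (l=0,1), which the direct array division would otherwise produce.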
# Filter smoothing
wT_total_masked_anafast_smooth= np.ones((100,1536),float)
wE_total_masked_anafast_smooth= np.ones((100,1536),float)
wT_total_masked_namaster_smooth= np.ones((100,1536),float)
wE_total_masked_namaster_smooth= np.ones((100,1536),float)
for i in np.arange(100):
wT_total_masked_anafast_smooth[i,2:] = smooth_filter(wT_anafast_raw[i],0,0)
wE_total_masked_anafast_smooth[i,2:] = smooth_filter(wE_anafast_raw[i],1,0)
wT_total_masked_namaster_smooth[i,2:] = smooth_filter(wT_namaster_raw[i],0,0)
wE_total_masked_namaster_smooth[i,2:] = smooth_filter(wE_namaster_raw[i],1,0)
# DASF: filter built as the ratio of the smoothed spectra
DASF_T = cls_total_masked_anafast_smooth[:,3,2:]/cls_total_masked_anafast_smooth[:,0,2:]
DASF_E = cls_total_masked_anafast_smooth[:,3,2:]/cls_total_masked_anafast_smooth[:,1,2:]
# DAFS: raw filter smoothed afterwards
DAFS_T = wT_total_masked_anafast_smooth[:,2:]
DAFS_E = wE_total_masked_anafast_smooth[:,2:]
# Same pair of definitions for the NaMaster spectra
DNSF_T = cls_total_masked_namaster_smooth[:,3,2:]/cls_total_masked_namaster_smooth[:,0,2:]
DNSF_E = cls_total_masked_namaster_smooth[:,3,2:]/cls_total_masked_namaster_smooth[:,1,2:]
DNFS_T = wT_total_masked_namaster_smooth[:,2:]
DNFS_E = wE_total_masked_namaster_smooth[:,2:]
# np.save(os.path.join(path,"Correlated/DASF_T.npy"), DASF_T)
# np.save(os.path.join(path,"Correlated/DASF_E.npy"), DASF_E)
# np.save(os.path.join(path,"Correlated/DAFS_T.npy"), DAFS_T)
# np.save(os.path.join(path,"Correlated/DAFS_E.npy"), DAFS_E)
# np.save(os.path.join(path,"Correlated/DNSF_T.npy"), DNSF_T)
# np.save(os.path.join(path,"Correlated/DNSF_E.npy"), DNSF_E)
# np.save(os.path.join(path,"Correlated/DNFS_T.npy"), DNFS_T)
# np.save(os.path.join(path,"Correlated/DNFS_E.npy"), DNFS_E)
DASF_T = np.load(os.path.join(path,"Correlated/DASF_T.npy"))
DASF_E = np.load(os.path.join(path,"Correlated/DASF_E.npy"))
DAFS_T = np.load(os.path.join(path,"Correlated/DAFS_T.npy"))
DAFS_E = np.load(os.path.join(path,"Correlated/DAFS_E.npy"))
DNSF_T = np.load(os.path.join(path,"Correlated/DNSF_T.npy"))
DNSF_E = np.load(os.path.join(path,"Correlated/DNSF_E.npy"))
DNFS_T = np.load(os.path.join(path,"Correlated/DNFS_T.npy"))
DNFS_E = np.load(os.path.join(path,"Correlated/DNFS_E.npy"))
We can compare the results obtained with the two smoothing approaches against the raw power spectra of individual realizations to quantify how much the smoothing improves them. These results can also be compared with those obtained in the previous section (Masked correlated maps (CMB+Noise)).
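Each block below scores a filter through the ratio of the dispersion of the residual map (ideal minus reconstructed) to the dispersion of the ideal map, evaluated over unmasked pixels. As a standalone sketch (the helper name `relative_dispersion` is hypothetical):

```python
import numpy as np

def relative_dispersion(residual, ideal, good):
    """Std of the residual map divided by the std of the ideal map,
    restricted to observed pixels. `good` is a boolean array, True
    where the sky is unmasked (equivalent to .compressed() on a
    healpy masked array)."""
    residual = np.asarray(residual, dtype=float)
    ideal = np.asarray(ideal, dtype=float)
    return np.std(residual[good]) / np.std(ideal[good])
```

A value near 0 means the reconstruction tracks the ideal correlated/uncorrelated map closely; a value near 1 means the residual is as large as the signal itself.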
dev_DASF = np.zeros((nbmc,4),float)
for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (no apodization)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Data (Anafast):
alm_total_ET_mask60_DASF = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(DASF_T[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_DASF = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(DASF_E[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_DASF = hp.sphtfunc.alm2map(alm_total_ET_mask60_DASF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_TcE_DASF = hp.sphtfunc.alm2map(alm_total_TE_mask60_DASF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_EncT_DASF = mask60_total_map_E - mask60_map_total_EcT_DASF
mask60_map_total_TncE_DASF = mask60_total_maps[0] - mask60_map_total_TcE_DASF
resid_EcT_map_ideal_DASF = ideal_EcT_map_masked - mask60_map_total_EcT_DASF
resid_TcE_map_ideal_DASF = ideal_TcE_map_masked - mask60_map_total_TcE_DASF
resid_EncT_map_ideal_DASF = ideal_EncT_map_masked - mask60_map_total_EncT_DASF
resid_TncE_map_ideal_DASF = ideal_TncE_map_masked - mask60_map_total_TncE_DASF
resid_EcT_map_ideal_DASF_masked = hp.ma(resid_EcT_map_ideal_DASF)
resid_TcE_map_ideal_DASF_masked = hp.ma(resid_TcE_map_ideal_DASF)
resid_EncT_map_ideal_DASF_masked = hp.ma(resid_EncT_map_ideal_DASF)
resid_TncE_map_ideal_DASF_masked = hp.ma(resid_TncE_map_ideal_DASF)
resid_EcT_map_ideal_DASF_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_DASF_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_DASF_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_DASF_masked.mask = np.logical_not(mask)
dev_DASF[i,0] = np.std(resid_EcT_map_ideal_DASF_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_DASF[i,1] = np.std(resid_TcE_map_ideal_DASF_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_DASF[i,2] = np.std(resid_EncT_map_ideal_DASF_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_DASF[i,3] = np.std(resid_TncE_map_ideal_DASF_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
dev_DAFS = np.zeros((nbmc,4),float)
cls_DAFS = np.zeros((nbmc,4,lmax+1),float)
for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (no apodization)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Data (Anafast):
alm_total_ET_mask60_DAFS = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(DAFS_T[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_DAFS = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(DAFS_E[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_DAFS = hp.sphtfunc.alm2map(alm_total_ET_mask60_DAFS, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_TcE_DAFS = hp.sphtfunc.alm2map(alm_total_TE_mask60_DAFS, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_EncT_DAFS = mask60_total_map_E - mask60_map_total_EcT_DAFS
mask60_map_total_TncE_DAFS = mask60_total_maps[0] - mask60_map_total_TcE_DAFS
resid_EcT_map_ideal_DAFS = ideal_EcT_map_masked - mask60_map_total_EcT_DAFS
resid_TcE_map_ideal_DAFS = ideal_TcE_map_masked - mask60_map_total_TcE_DAFS
resid_EncT_map_ideal_DAFS = ideal_EncT_map_masked - mask60_map_total_EncT_DAFS
resid_TncE_map_ideal_DAFS = ideal_TncE_map_masked - mask60_map_total_TncE_DAFS
resid_EcT_map_ideal_DAFS_masked = hp.ma(resid_EcT_map_ideal_DAFS)
resid_TcE_map_ideal_DAFS_masked = hp.ma(resid_TcE_map_ideal_DAFS)
resid_EncT_map_ideal_DAFS_masked = hp.ma(resid_EncT_map_ideal_DAFS)
resid_TncE_map_ideal_DAFS_masked = hp.ma(resid_TncE_map_ideal_DAFS)
resid_EcT_map_ideal_DAFS_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_DAFS_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_DAFS_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_DAFS_masked.mask = np.logical_not(mask)
dev_DAFS[i,0] = np.std(resid_EcT_map_ideal_DAFS_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_DAFS[i,1] = np.std(resid_TcE_map_ideal_DAFS_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_DAFS[i,2] = np.std(resid_EncT_map_ideal_DAFS_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_DAFS[i,3] = np.std(resid_TncE_map_ideal_DAFS_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
cls_DAFS[i,0] = hp.anafast(resid_EcT_map_ideal_DAFS_masked)/np.mean(mask60)
cls_DAFS[i,1] = hp.anafast(resid_TcE_map_ideal_DAFS_masked)/np.mean(mask60)
cls_DAFS[i,2] = hp.anafast(resid_EncT_map_ideal_DAFS_masked)/np.mean(mask60)
cls_DAFS[i,3] = hp.anafast(resid_TncE_map_ideal_DAFS_masked)/np.mean(mask60)
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
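The residual spectra above are divided by `np.mean(mask60)`, i.e. the observed sky fraction. This is the usual first-order f_sky correction of a pseudo-spectrum; it rescales the overall power but neglects the mode coupling that NaMaster treats properly. As a standalone sketch (the helper name `fsky_correct` is hypothetical):

```python
import numpy as np

def fsky_correct(cl_masked, mask):
    """First-order correction for the power lost to a binary mask:
    divide the pseudo-spectrum by the observed sky fraction f_sky."""
    fsky = np.mean(np.asarray(mask, dtype=float))
    return np.asarray(cl_masked, dtype=float) / fsky
```

For the 60% Galactic mask used here, f_sky is about 0.6, so the raw masked spectra underestimate the full-sky power by roughly that factor.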
# np.save(os.path.join(path,"Deviations/dev_DASF.npy"), dev_DASF)
# np.save(os.path.join(path,"Deviations/dev_DAFS.npy"), dev_DAFS)
# np.save(os.path.join(path,"Deviations/cls_DAFS.npy"), cls_DAFS)
dev_DNSF = np.zeros((nbmc,4),float)
for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (no apodization)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Data (NaMaster):
alm_total_ET_mask60_DNSF = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(DNSF_T[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_DNSF = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(DNSF_E[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_DNSF = hp.sphtfunc.alm2map(alm_total_ET_mask60_DNSF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_TcE_DNSF = hp.sphtfunc.alm2map(alm_total_TE_mask60_DNSF, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_EncT_DNSF = mask60_total_map_E - mask60_map_total_EcT_DNSF
mask60_map_total_TncE_DNSF = mask60_total_maps[0] - mask60_map_total_TcE_DNSF
resid_EcT_map_ideal_DNSF = ideal_EcT_map_masked - mask60_map_total_EcT_DNSF
resid_TcE_map_ideal_DNSF = ideal_TcE_map_masked - mask60_map_total_TcE_DNSF
resid_EncT_map_ideal_DNSF = ideal_EncT_map_masked - mask60_map_total_EncT_DNSF
resid_TncE_map_ideal_DNSF = ideal_TncE_map_masked - mask60_map_total_TncE_DNSF
resid_EcT_map_ideal_DNSF_masked = hp.ma(resid_EcT_map_ideal_DNSF)
resid_TcE_map_ideal_DNSF_masked = hp.ma(resid_TcE_map_ideal_DNSF)
resid_EncT_map_ideal_DNSF_masked = hp.ma(resid_EncT_map_ideal_DNSF)
resid_TncE_map_ideal_DNSF_masked = hp.ma(resid_TncE_map_ideal_DNSF)
resid_EcT_map_ideal_DNSF_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_DNSF_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_DNSF_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_DNSF_masked.mask = np.logical_not(mask)
dev_DNSF[i,0] = np.std(resid_EcT_map_ideal_DNSF_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_DNSF[i,1] = np.std(resid_TcE_map_ideal_DNSF_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_DNSF[i,2] = np.std(resid_EncT_map_ideal_DNSF_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_DNSF[i,3] = np.std(resid_TncE_map_ideal_DNSF_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
dev_DNFS = np.zeros((nbmc,4),float)
for i in np.arange(nbmc):
# hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
# corr_th_total_maps = hdul_ideal[i+1].data
# hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(total_maps[0])
total_map_Q = hp.pixelfunc.remove_monopole(total_maps[1])
total_map_U = hp.pixelfunc.remove_monopole(total_maps[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(total_map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
# (1) Mask (no apodization)
mask60_total_maps = hp.ma(total_maps)
mask60_total_maps.mask = np.logical_not(mask) #UNSEEN
mask60_total_map_E = hp.ma(total_map_E)
mask60_total_map_E.mask = np.logical_not(mask)
# (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Data (NaMaster):
alm_total_ET_mask60_DNFS = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(DNFS_T[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_DNFS = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(DNFS_E[i],[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_DNFS = hp.sphtfunc.alm2map(alm_total_ET_mask60_DNFS, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_TcE_DNFS = hp.sphtfunc.alm2map(alm_total_TE_mask60_DNFS, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm applied at the alm level ##
mask60_map_total_EncT_DNFS = mask60_total_map_E - mask60_map_total_EcT_DNFS
mask60_map_total_TncE_DNFS = mask60_total_maps[0] - mask60_map_total_TcE_DNFS
resid_EcT_map_ideal_DNFS = ideal_EcT_map_masked - mask60_map_total_EcT_DNFS
resid_TcE_map_ideal_DNFS = ideal_TcE_map_masked - mask60_map_total_TcE_DNFS
resid_EncT_map_ideal_DNFS = ideal_EncT_map_masked - mask60_map_total_EncT_DNFS
resid_TncE_map_ideal_DNFS = ideal_TncE_map_masked - mask60_map_total_TncE_DNFS
resid_EcT_map_ideal_DNFS_masked = hp.ma(resid_EcT_map_ideal_DNFS)
resid_TcE_map_ideal_DNFS_masked = hp.ma(resid_TcE_map_ideal_DNFS)
resid_EncT_map_ideal_DNFS_masked = hp.ma(resid_EncT_map_ideal_DNFS)
resid_TncE_map_ideal_DNFS_masked = hp.ma(resid_TncE_map_ideal_DNFS)
resid_EcT_map_ideal_DNFS_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_DNFS_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_DNFS_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_DNFS_masked.mask = np.logical_not(mask)
dev_DNFS[i,0] = np.std(resid_EcT_map_ideal_DNFS_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_DNFS[i,1] = np.std(resid_TcE_map_ideal_DNFS_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_DNFS[i,2] = np.std(resid_EncT_map_ideal_DNFS_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_DNFS[i,3] = np.std(resid_TncE_map_ideal_DNFS_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Deviations/dev_DNFS.npy"), dev_DNFS)
# np.save(os.path.join(path,"Deviations/dev_DNSF.npy"), dev_DNSF)
dev_DAFS = np.load(os.path.join(path,"Deviations/dev_DAFS.npy"))
dev_DASF = np.load(os.path.join(path,"Deviations/dev_DASF.npy"))
dev_DNFS = np.load(os.path.join(path,"Deviations/dev_DNFS.npy"))
dev_DNSF = np.load(os.path.join(path,"Deviations/dev_DNSF.npy"))
dev_DAF = np.load(os.path.join(path,"Deviations/dev_DAF.npy"))
dev_DNF = np.load(os.path.join(path,"Deviations/dev_DNF.npy"))
col_names = ['EcT','TcE','EncT','TncE']
intervals_DAFS = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution DAFS filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_DAFS[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_DAFS[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DAFS[:,i],68,dev_max)
intervals_DAFS[0,i,0] = dev_max
intervals_DAFS[0,i,1] = inc_inf
intervals_DAFS[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DAFS[:,i],95,dev_max)
intervals_DAFS[1,i,0] = dev_max
intervals_DAFS[1,i,1] = inc_inf
intervals_DAFS[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
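The `errors_bars` helper is defined in an earlier section of the notebook; its return signature can be inferred from how it is used above (sorted samples, (value, index) pairs for the lower and upper bounds, and asymmetric uncertainties relative to the mode). A hypothetical percentile-based reimplementation, consistent with that usage:

```python
import numpy as np

def errors_bars(samples, percent, mode):
    """Equal-tail confidence interval around a mode estimate.
    Returns the sorted samples, (value, index) for the lower and
    upper bounds, and the asymmetric uncertainties mode - lower
    and upper - mode. Sketch only: the original helper may differ."""
    sorted_array = np.sort(np.asarray(samples, dtype=float))
    n = sorted_array.size
    tail = (100.0 - percent) / 2.0           # probability in each tail
    i_lo = int(np.floor(n * tail / 100.0))
    i_hi = min(n - 1, int(np.ceil(n * (100.0 - tail) / 100.0)) - 1)
    err_inf = (sorted_array[i_lo], i_lo)
    err_sup = (sorted_array[i_hi], i_hi)
    inc_inf = mode - sorted_array[i_lo]
    inc_sup = sorted_array[i_hi] - mode
    return sorted_array, err_inf, err_sup, inc_inf, inc_sup
```

Note that with a mode estimated from the KDE peak, the asymmetric uncertainties can be negative when the mode falls outside the equal-tail interval, which explains entries like `+-0.04281` in the DNFS row of the tables below.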
col_names = ['EcT','TcE','EncT','TncE']
intervals_DASF = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution DASF filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_DASF[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_DASF[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DASF[:,i],68,dev_max)
intervals_DASF[0,i,0] = dev_max
intervals_DASF[0,i,1] = inc_inf
intervals_DASF[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DASF[:,i],95,dev_max)
intervals_DASF[1,i,0] = dev_max
intervals_DASF[1,i,1] = inc_inf
intervals_DASF[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_DNFS = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution DNFS filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_DNFS[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_DNFS[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DNFS[:,i],68,dev_max)
intervals_DNFS[0,i,0] = dev_max
intervals_DNFS[0,i,1] = inc_inf
intervals_DNFS[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DNFS[:,i],95,dev_max)
intervals_DNFS[1,i,0] = dev_max
intervals_DNFS[1,i,1] = inc_inf
intervals_DNFS[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_DNSF = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution DNSF filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_DNSF[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_DNSF[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DNSF[:,i],68,dev_max)
intervals_DNSF[0,i,0] = dev_max
intervals_DNSF[0,i,1] = inc_inf
intervals_DNSF[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DNSF[:,i],95,dev_max)
intervals_DNSF[1,i,0] = dev_max
intervals_DNSF[1,i,1] = inc_inf
intervals_DNSF[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
col_names = ['EcT','TcE','EncT','TncE']
intervals_SAFM = np.empty((2,4,3))
fig, axes = plt.subplots(1, 4, figsize=(30,6))
fig.suptitle('Dev distribution SAFM filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SAFM[:,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SAFM[:,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM[:,i],68,dev_max)
intervals_SAFM[0,i,0] = dev_max
intervals_SAFM[0,i,1] = inc_inf
intervals_SAFM[0,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM[:,i],95,dev_max)
intervals_SAFM[1,i,0] = dev_max
intervals_SAFM[1,i,1] = inc_inf
intervals_SAFM[1,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
int68_inf_data = pd.DataFrame([intervals_DAFS[0,:,1],intervals_DASF[0,:,1],intervals_DNFS[0,:,1],intervals_DNSF[0,:,1]], index=['DAFS','DASF','DNFS','DNSF'], columns=['EcT','TcE','EncT','TncE'])
int68_sup_data = pd.DataFrame([intervals_DAFS[0,:,2],intervals_DASF[0,:,2],intervals_DNFS[0,:,2],intervals_DNSF[0,:,2]], index=['DAFS','DASF','DNFS','DNSF'], columns=['EcT','TcE','EncT','TncE'])
int95_inf_data = pd.DataFrame([intervals_DAFS[1,:,1],intervals_DASF[1,:,1],intervals_DNFS[1,:,1],intervals_DNSF[1,:,1]], index=['DAFS','DASF','DNFS','DNSF'], columns=['EcT','TcE','EncT','TncE'])
int95_sup_data = pd.DataFrame([intervals_DAFS[1,:,2],intervals_DASF[1,:,2],intervals_DNFS[1,:,2],intervals_DNSF[1,:,2]], index=['DAFS','DASF','DNFS','DNSF'], columns=['EcT','TcE','EncT','TncE'])
maxs_data = pd.DataFrame([intervals_DAFS[0,:,0],intervals_DASF[0,:,0],intervals_DNFS[0,:,0],intervals_DNSF[0,:,0]], index=['DAFS','DASF','DNFS','DNSF'], columns=['EcT','TcE','EncT','TncE']).style.\
apply(highlight_max).\
apply(highlight_min)
maxs_data
| EcT | TcE | EncT | TncE | |
|---|---|---|---|---|
| DAFS | 0.159769 | 0.284172 | 0.084812 | 0.161653 |
| DASF | 0.146832 | 0.335386 | 0.077923 | 0.186672 |
| DNFS | 0.164555 | 0.500636 | 0.088231 | 0.270078 |
| DNSF | 0.160165 | 0.421055 | 0.084946 | 0.230330 |
maxs_data = (pd.DataFrame([intervals_DAFS[0,:,0],intervals_DASF[0,:,0],intervals_DNFS[0,:,0],intervals_DNSF[0,:,0]],index = {'DAFS':0,'DASF':1,'DNFS':2,'DNSF':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})).round(5)
int68_data = pd.DataFrame(maxs_data.astype(str)+ "-" + (int68_inf_data.round(5)).astype(str) + "+" + (int68_sup_data.round(5)).astype(str),index = {'DAFS':0,'DASF':1,'DNFS':2,'DNSF':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})
int68_data.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 68% confidence intervals')
| | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| DAFS | 0.15977-0.00586+0.00726 | 0.28417-0.03573+0.02124 | 0.08481-0.0033+0.00345 | 0.16165-0.01937+0.01501 |
| DASF | 0.14683-0.00803+0.00212 | 0.33539-0.05864+0.02304 | 0.07792-0.00431+0.00119 | 0.18667-0.03449+0.00875 |
| DNFS | 0.16455-0.09057+-0.04281 | 0.50064-0.60899+0.95648 | 0.08823-0.04773+0.02052 | 0.27008-0.34143+0.39475 |
| DNSF | 0.16016-0.01899+0.00368 | 0.42106-0.13181+0.06484 | 0.08495-0.01059+0.00186 | 0.23033-0.07498+0.03175 |
maxs_data = (pd.DataFrame([intervals_DAFS[0,:,0],intervals_DASF[0,:,0],intervals_DNFS[0,:,0],intervals_DNSF[0,:,0]],index = {'DAFS':0,'DASF':1,'DNFS':2,'DNSF':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})).round(2)
int95_data = pd.DataFrame(maxs_data.astype(str)+ "-" + (int95_inf_data.round(5)).astype(str) + "+" + (int95_sup_data.round(5)).astype(str),index = {'DAFS':0,'DASF':1,'DNFS':2,'DNSF':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})
int95_data.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 95% confidence intervals')
| | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| DAFS | 0.16-0.0095+0.0158 | 0.28-0.05712+0.04828 | 0.08-0.00527+0.00696 | 0.16-0.03384+0.02843 |
| DASF | 0.15-0.01125+0.00662 | 0.34-0.09484+0.06971 | 0.08-0.00619+0.00319 | 0.19-0.05839+0.03798 |
| DNFS | 0.16-0.09341+2.78131 | 0.5-0.65841+8.02011 | 0.09-0.04934+1.48742 | 0.27-0.37399+4.76554 |
| DNSF | 0.16-0.01952+0.01572 | 0.42-0.19571+0.33161 | 0.08-0.01072+0.00599 | 0.23-0.1098+0.13487 |
Finally, we generate maps including foregrounds and apply the smoothing method to perform the analysis as previously shown. It is important to note that we do not have any model describing the foregrounds.
aux = np.array([1, 10, 50, 100])
dev_DAFS_foreg_aux = np.zeros((nbmc, 4, 4),float)
# for i in np.arange(nbmc):
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
for a in np.arange(len(aux)):
maps_TQU = total_maps + foreg_maps_TQU_scaled * aux[a]
map_E = total_map_E + foreg_map_E_scaled * aux[a]
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(maps_TQU[0])
total_map_Q = hp.pixelfunc.remove_monopole(maps_TQU[1])
total_map_U = hp.pixelfunc.remove_monopole(maps_TQU[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
#### (1) Mask (without apodizing)
mask60_maps_TQU = hp.ma(total_maps)
mask60_maps_TQU.mask = np.logical_not(mask) #UNSEEN
mask60_map_E = hp.ma(total_map_E)
mask60_map_E.mask = np.logical_not(mask)
cls_total_foreg_mask60_anafast = hp.sphtfunc.anafast(mask60_maps_TQU, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
#### (2) Filter
#Raw filters:
wT_anafast_foreg_raw = cls_total_foreg_mask60_anafast[3,2:]/cls_total_foreg_mask60_anafast[0,2:]
wE_anafast_foreg_raw = cls_total_foreg_mask60_anafast[3,2:]/cls_total_foreg_mask60_anafast[1,2:]
#Smooth method:
DAFS_foreg_T = smooth_filter(wT_anafast_foreg_raw, 0, 0)
DAFS_foreg_E = smooth_filter(wE_anafast_foreg_raw, 1, 0)
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_maps_TQU, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Best Data Method (Anafast):
alm_total_ET_mask60_DAFS_foreg = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(DAFS_foreg_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_DAFS_foreg = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(DAFS_foreg_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_DAFS_foreg = hp.sphtfunc.alm2map(alm_total_ET_mask60_DAFS_foreg, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
mask60_map_total_TcE_DAFS_foreg = hp.sphtfunc.alm2map(alm_total_TE_mask60_DAFS_foreg, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
mask60_map_total_EncT_DAFS_foreg = mask60_map_E - mask60_map_total_EcT_DAFS_foreg
mask60_map_total_TncE_DAFS_foreg = mask60_maps_TQU[0] - mask60_map_total_TcE_DAFS_foreg
#### (4) Residual maps --> ideal - filtered ####
#Ideal correlated maps
hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
corr_th_total_maps = hdul_ideal[i+1].data
hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
resid_EcT_map_ideal_DAFS_foreg = ideal_EcT_map_masked - mask60_map_total_EcT_DAFS_foreg
resid_TcE_map_ideal_DAFS_foreg = ideal_TcE_map_masked - mask60_map_total_TcE_DAFS_foreg
resid_EncT_map_ideal_DAFS_foreg = ideal_EncT_map_masked - mask60_map_total_EncT_DAFS_foreg
resid_TncE_map_ideal_DAFS_foreg = ideal_TncE_map_masked - mask60_map_total_TncE_DAFS_foreg
resid_EcT_map_ideal_DAFS_foreg_masked = hp.ma(resid_EcT_map_ideal_DAFS_foreg)
resid_TcE_map_ideal_DAFS_foreg_masked = hp.ma(resid_TcE_map_ideal_DAFS_foreg)
resid_EncT_map_ideal_DAFS_foreg_masked = hp.ma(resid_EncT_map_ideal_DAFS_foreg)
resid_TncE_map_ideal_DAFS_foreg_masked = hp.ma(resid_TncE_map_ideal_DAFS_foreg)
resid_EcT_map_ideal_DAFS_foreg_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_DAFS_foreg_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_DAFS_foreg_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_DAFS_foreg_masked.mask = np.logical_not(mask)
dev_DAFS_foreg_aux[i,a,0] = np.std(resid_EcT_map_ideal_DAFS_foreg_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_DAFS_foreg_aux[i,a,1] = np.std(resid_TcE_map_ideal_DAFS_foreg_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_DAFS_foreg_aux[i,a,2] = np.std(resid_EncT_map_ideal_DAFS_foreg_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_DAFS_foreg_aux[i,a,3] = np.std(resid_TncE_map_ideal_DAFS_foreg_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Foregrounds/dev_DAFS_foreg_aux.npy"), dev_DAFS_foreg_aux)
dev_DAFS_foreg_aux = np.load(os.path.join(path,"Foregrounds/dev_DAFS_foreg_aux.npy"))
# dev_DAFS_foreg_aux[nbmc(x100),aux(x4),correlated(x4)]
col_names = ['EcT','TcE','EncT','TncE']
intervals_DAFS_foreg_aux = np.empty((2,4,4,3))
for a in np.arange(4):
fig, axes = plt.subplots(1, 4, figsize=(30,6))
# fig.suptitle('Dev distribution DAFS_foreg filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_DAFS_foreg_aux[:,a,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_DAFS_foreg_aux[:,a,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DAFS_foreg_aux[:,a,i],68,dev_max)
intervals_DAFS_foreg_aux[0,a,i,0] = dev_max
intervals_DAFS_foreg_aux[0,a,i,1] = inc_inf
intervals_DAFS_foreg_aux[0,a,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_DAFS_foreg_aux[:,a,i],95,dev_max)
intervals_DAFS_foreg_aux[1,a,i,0] = dev_max
intervals_DAFS_foreg_aux[1,a,i,1] = inc_inf
intervals_DAFS_foreg_aux[1,a,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
maxs_foreg_aux = (pd.DataFrame([intervals_DAFS_foreg_aux[0,0,:,0],intervals_DAFS_foreg_aux[0,1,:,0],intervals_DAFS_foreg_aux[0,2,:,0],intervals_DAFS_foreg_aux[0,3,:,0]],index = {'DAFS_foreg':0,'DAFS_foreg (x10)':1,'DAFS_foreg (x50)':2,'DAFS_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})).round(5)
int68_inf = (pd.DataFrame([intervals_DAFS_foreg_aux[0,0,:,1],intervals_DAFS_foreg_aux[0,1,:,1],intervals_DAFS_foreg_aux[0,2,:,1],intervals_DAFS_foreg_aux[0,3,:,1]],index = {'DAFS_foreg':0,'DAFS_foreg (x10)':1,'DAFS_foreg (x50)':2,'DAFS_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3}))
int68_sup = (pd.DataFrame([intervals_DAFS_foreg_aux[0,0,:,2],intervals_DAFS_foreg_aux[0,1,:,2],intervals_DAFS_foreg_aux[0,2,:,2],intervals_DAFS_foreg_aux[0,3,:,2]],index = {'DAFS_foreg':0,'DAFS_foreg (x10)':1,'DAFS_foreg (x50)':2,'DAFS_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3}))
int68_foreg_aux = pd.DataFrame(maxs_foreg_aux.astype(str) + "-" + (int68_inf.round(5)).astype(str) + "+" + (int68_sup.round(5)).astype(str),index = {'DAFS_foreg':0,'DAFS_foreg (x10)':1,'DAFS_foreg (x50)':2,'DAFS_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})
int68_foreg_aux.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 68% confidence intervals')
| | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| DAFS_foreg | 0.15963-0.00649+0.00686 | 0.28427-0.04028+0.02079 | 0.0849-0.0035+0.0036 | 0.16292-0.01842+0.01688 |
| DAFS_foreg (x10) | 0.15786-0.00782+0.0046 | 0.30091-0.05324+0.0192 | 0.08835-0.00635+0.00251 | 0.17588-0.02615+0.01611 |
| DAFS_foreg (x50) | 0.15636-0.01085+0.00325 | 0.50686-0.04418+0.02618 | 0.18389-0.00939+0.00449 | 0.2837-0.02848+0.01248 |
| DAFS_foreg (x100) | 0.15694-0.01141+0.00276 | 0.58935-0.0423+0.01805 | 0.43889-0.00671+0.00543 | 0.33609-0.03978+0.01264 |
markers = ['o','*','^','s']
for i in np.arange(4):
plt.scatter(np.linspace(1,4,4),intervals_DAFS_foreg_aux[0,:,i,0], marker=markers[i],label=col_names[i])
plt.plot(np.linspace(1,4,10),interpolation(np.linspace(1,4,4),intervals_DAFS_foreg_aux[0,:,i,0],np.linspace(1,4,10)))
plt.legend()
aux = np.array([1, 10, 50, 100])
dev_SAFM_foreg_aux = np.zeros((nbmc, 4, 4),float)
# for i in np.arange(nbmc):
hdul_total = fits.open('files/Data/total_maps.fits', mode='readonly', memmap=True)
total_maps = hdul_total[i+1].data
hdul_total.close()
hdul_total_E = fits.open('files/Data/total_map_E.fits', mode='readonly', memmap=True)
total_map_E = hdul_total_E[i+1].data
hdul_total_E.close()
for a in np.arange(len(aux)):
maps_TQU = total_maps + foreg_maps_TQU_scaled * aux[a]
map_E = total_map_E + foreg_map_E_scaled * aux[a]
# (0) Remove monopole and dipole
total_map_T = hp.pixelfunc.remove_monopole(maps_TQU[0])
total_map_Q = hp.pixelfunc.remove_monopole(maps_TQU[1])
total_map_U = hp.pixelfunc.remove_monopole(maps_TQU[2])
total_map_T = hp.pixelfunc.remove_dipole(total_map_T)
total_map_Q = hp.pixelfunc.remove_dipole(total_map_Q)
total_map_U = hp.pixelfunc.remove_dipole(total_map_U)
total_maps = [total_map_T, total_map_Q, total_map_U]
total_map_E = hp.pixelfunc.remove_monopole(map_E)
total_map_E = hp.pixelfunc.remove_dipole(total_map_E)
#### (1) Mask (without apodizing)
mask60_maps_TQU = hp.ma(total_maps)
mask60_maps_TQU.mask = np.logical_not(mask) #UNSEEN
mask60_map_E = hp.ma(total_map_E)
mask60_map_E.mask = np.logical_not(mask)
cls_total_foreg_mask60_anafast = hp.sphtfunc.anafast(mask60_maps_TQU, nspec=None, lmax=lmax, iter=3, alm=False, pol=True)
#### (2) Filter
alm_total_mask60 = hp.sphtfunc.map2alm(mask60_maps_TQU, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
# Best Data Method (Anafast):
alm_total_ET_mask60_SAFM_foreg = hp.sphtfunc.smoothalm(alm_total_mask60[0], beam_window=np.insert(SAFM_T,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_mask60_SAFM_foreg = hp.sphtfunc.smoothalm(alm_total_mask60[1], beam_window=np.insert(SAFM_E,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
#### (3) Correlated and uncorrelated maps ####
mask60_map_total_EcT_SAFM_foreg = hp.sphtfunc.alm2map(alm_total_ET_mask60_SAFM_foreg, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
mask60_map_total_TcE_SAFM_foreg = hp.sphtfunc.alm2map(alm_total_TE_mask60_SAFM_foreg, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
mask60_map_total_EncT_SAFM_foreg = mask60_map_E - mask60_map_total_EcT_SAFM_foreg
mask60_map_total_TncE_SAFM_foreg = mask60_maps_TQU[0] - mask60_map_total_TcE_SAFM_foreg
#### (4) Residual maps --> ideal - filtered ####
#Ideal correlated maps
hdul_ideal = fits.open('files/Correlated/corr_th_total_maps.fits', mode='readonly', memmap=True)
corr_th_total_maps = hdul_ideal[i+1].data
hdul_ideal.close()
ideal_EcT_map_masked = hp.ma(corr_th_total_maps['EcT'])
ideal_TcE_map_masked = hp.ma(corr_th_total_maps['TcE'])
ideal_EncT_map_masked = hp.ma(corr_th_total_maps['EncT'])
ideal_TncE_map_masked = hp.ma(corr_th_total_maps['TncE'])
ideal_EcT_map_masked.mask = np.logical_not(mask)
ideal_TcE_map_masked.mask = np.logical_not(mask)
ideal_EncT_map_masked.mask = np.logical_not(mask)
ideal_TncE_map_masked.mask = np.logical_not(mask)
resid_EcT_map_ideal_SAFM_foreg = ideal_EcT_map_masked - mask60_map_total_EcT_SAFM_foreg
resid_TcE_map_ideal_SAFM_foreg = ideal_TcE_map_masked - mask60_map_total_TcE_SAFM_foreg
resid_EncT_map_ideal_SAFM_foreg = ideal_EncT_map_masked - mask60_map_total_EncT_SAFM_foreg
resid_TncE_map_ideal_SAFM_foreg = ideal_TncE_map_masked - mask60_map_total_TncE_SAFM_foreg
resid_EcT_map_ideal_SAFM_foreg_masked = hp.ma(resid_EcT_map_ideal_SAFM_foreg)
resid_TcE_map_ideal_SAFM_foreg_masked = hp.ma(resid_TcE_map_ideal_SAFM_foreg)
resid_EncT_map_ideal_SAFM_foreg_masked = hp.ma(resid_EncT_map_ideal_SAFM_foreg)
resid_TncE_map_ideal_SAFM_foreg_masked = hp.ma(resid_TncE_map_ideal_SAFM_foreg)
resid_EcT_map_ideal_SAFM_foreg_masked.mask = np.logical_not(mask)
resid_TcE_map_ideal_SAFM_foreg_masked.mask = np.logical_not(mask)
resid_EncT_map_ideal_SAFM_foreg_masked.mask = np.logical_not(mask)
resid_TncE_map_ideal_SAFM_foreg_masked.mask = np.logical_not(mask)
dev_SAFM_foreg_aux[i,a,0] = np.std(resid_EcT_map_ideal_SAFM_foreg_masked.compressed())/np.std(ideal_EcT_map_masked.compressed())
dev_SAFM_foreg_aux[i,a,1] = np.std(resid_TcE_map_ideal_SAFM_foreg_masked.compressed())/np.std(ideal_TcE_map_masked.compressed())
dev_SAFM_foreg_aux[i,a,2] = np.std(resid_EncT_map_ideal_SAFM_foreg_masked.compressed())/np.std(ideal_EncT_map_masked.compressed())
dev_SAFM_foreg_aux[i,a,3] = np.std(resid_TncE_map_ideal_SAFM_foreg_masked.compressed())/np.std(ideal_TncE_map_masked.compressed())
print("%.2f %% completed"%(100*(i+1)/nbmc))
print('******************************************************')
# np.save(os.path.join(path,"Foregrounds/dev_SAFM_foreg_aux.npy"), dev_SAFM_foreg_aux)
dev_SAFM_foreg_aux = np.load(os.path.join(path,"Foregrounds/dev_SAFM_foreg_aux.npy"))
# dev_SAFM_foreg_aux[nbmc(x100),aux(x4),correlated(x4)]
col_names = ['EcT','TcE','EncT','TncE']
intervals_SAFM_foreg_aux = np.empty((2,4,4,3))
for a in np.arange(4):
fig, axes = plt.subplots(1, 4, figsize=(30,6))
# fig.suptitle('Dev distribution SAFM_foreg filter')
for i in np.arange(4):
axes[i].hist(np.array(dev_SAFM_foreg_aux[:,a,i]), color='teal', alpha = 0.6, density=True)
sns.kdeplot(np.array(dev_SAFM_foreg_aux[:,a,i]),ax=axes[i])
kde_curve = axes[i].lines[0]
x = kde_curve.get_xdata()
y = kde_curve.get_ydata()
y_max = y.max()
dev_max = x[y.argmax()]
axes[i].vlines(dev_max,0,ymax=y_max, color = 'blue')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM_foreg_aux[:,a,i],68,dev_max)
intervals_SAFM_foreg_aux[0,a,i,0] = dev_max
intervals_SAFM_foreg_aux[0,a,i,1] = inc_inf
intervals_SAFM_foreg_aux[0,a,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='green')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='green', label='68%')
sorted_array, err_inf, err_sup, inc_inf, inc_sup = errors_bars(dev_SAFM_foreg_aux[:,a,i],95,dev_max)
intervals_SAFM_foreg_aux[1,a,i,0] = dev_max
intervals_SAFM_foreg_aux[1,a,i,1] = inc_inf
intervals_SAFM_foreg_aux[1,a,i,2] = inc_sup
axes[i].vlines(sorted_array[err_inf[1]],0,ymax=y_max, linestyle = 'dashed', color='orange')
axes[i].vlines(sorted_array[err_sup[1]],0,ymax=y_max, linestyle = 'dashed', color='orange', label='95%')
axes[i].legend()
axes[i].set_title(col_names[i])
maxs_foreg_aux = (pd.DataFrame([intervals_SAFM_foreg_aux[0,0,:,0],intervals_SAFM_foreg_aux[0,1,:,0],intervals_SAFM_foreg_aux[0,2,:,0],intervals_SAFM_foreg_aux[0,3,:,0]],index = {'SAFM_foreg':0,'SAFM_foreg (x10)':1,'SAFM_foreg (x50)':2,'SAFM_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})).round(5)
int68_inf = (pd.DataFrame([intervals_SAFM_foreg_aux[0,0,:,1],intervals_SAFM_foreg_aux[0,1,:,1],intervals_SAFM_foreg_aux[0,2,:,1],intervals_SAFM_foreg_aux[0,3,:,1]],index = {'SAFM_foreg':0,'SAFM_foreg (x10)':1,'SAFM_foreg (x50)':2,'SAFM_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3}))
int68_sup = (pd.DataFrame([intervals_SAFM_foreg_aux[0,0,:,2],intervals_SAFM_foreg_aux[0,1,:,2],intervals_SAFM_foreg_aux[0,2,:,2],intervals_SAFM_foreg_aux[0,3,:,2]],index = {'SAFM_foreg':0,'SAFM_foreg (x10)':1,'SAFM_foreg (x50)':2,'SAFM_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3}))
int68_foreg_aux = pd.DataFrame(maxs_foreg_aux.astype(str) + "-" + (int68_inf.round(5)).astype(str) + "+" + (int68_sup.round(5)).astype(str),index = {'SAFM_foreg':0,'SAFM_foreg (x10)':1,'SAFM_foreg (x50)':2,'SAFM_foreg (x100)':3},columns={'EcT':0,'TcE':1,'EncT':2,'TncE':3})
int68_foreg_aux.style.set_table_attributes("style='display:inline'").set_caption('Dispersion of the residual maps with 68% confidence intervals')
| | EcT | TcE | EncT | TncE |
|---|---|---|---|---|
| SAFM_foreg | 0.13771-0.00609+0.00105 | 0.24126-0.02925+0.01175 | 0.07339-0.00255+0.00077 | 0.13438-0.02049+0.00842 |
| SAFM_foreg (x10) | 0.13767-0.00613+0.00091 | 0.26056-0.06383+0.01091 | 0.07997-0.00133+0.00265 | 0.1449-0.04118+0.00811 |
| SAFM_foreg (x50) | 0.13823-0.00633+0.00108 | 0.70769-0.07601+0.03694 | 0.17893-0.00227+0.00346 | 0.39678-0.05461+0.02271 |
| SAFM_foreg (x100) | 0.14149-0.00325+0.00165 | 1.77317-0.10809+0.04038 | 0.43475-0.00224+0.00357 | 1.02711-0.03444+0.05833 |
markers = ['o','*','^','s']
for i in np.arange(4):
plt.scatter(np.linspace(1,4,4),intervals_SAFM_foreg_aux[0,:,i,0], marker=markers[i],label=col_names[i])
plt.plot(np.linspace(1,4,10),interpolation(np.linspace(1,4,4),intervals_SAFM_foreg_aux[0,:,i,0],np.linspace(1,4,10)))
# plt.plot(np.arange(4),intervals_DAFS_foreg_aux[0,:,i,0])
plt.legend()
Dls_Planck_TT = np.insert(np.transpose(np.loadtxt(os.path.join(path,"Data/COM_PowerSpect_CMB-TT-full_R3.01.txt"))),[0,0],0,axis=1) # starts at l=2
Dls_Planck_TE = np.insert(np.transpose(np.loadtxt(os.path.join(path,"Data/COM_PowerSpect_CMB-TE-full_R3.01.txt"))),[0,0],0,axis=1)
ell_Planck_TT = np.arange(Dls_Planck_TT.shape[1])
ell_Planck_TE = np.arange(Dls_Planck_TE.shape[1])
Cls_Planck_TT = Dls_Planck_TT * (2 * np.pi) / (ell_Planck_TT*(ell_Planck_TT+1))
Cls_Planck_TE = Dls_Planck_TE * (2 * np.pi) / (ell_Planck_TE*(ell_Planck_TE+1))
## LiteBIRD ##
npix = hp.nside2npix(nside)
## Noise -> Gaussian realization of known variance
# 2.6 muK arcmin -> per-pixel sensitivity at nside = 512 (muK)
sigma_T = (2.6/np.sqrt(2)) / (Anside*(180*60/np.pi))
sigma_P = 2.6 / (Anside*(180*60/np.pi))
noise_map_T = np.random.normal(0,sigma_T,npix)
noise_map_Q = np.random.normal(0,sigma_P,npix)
noise_map_U = np.random.normal(0,sigma_P,npix)
noise_maps = np.array([noise_map_T, noise_map_Q, noise_map_U], np.float64)
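The per-pixel noise level above comes from dividing the sensitivity in $\mu$K arcmin by the pixel side in arcmin (the factor `Anside*(180*60/np.pi)` converts a pixel side in radians to arcmin). A minimal pure-NumPy sketch of that conversion, under the assumption that the HEALPix pixel side is approximated by the square root of the pixel solid angle:

```python
import numpy as np

nside = 512
npix = 12 * nside**2
# Pixel side in arcmin: sqrt of the (equal-area) HEALPix pixel solid angle.
pix_area_sr = 4.0 * np.pi / npix
pix_side_arcmin = np.degrees(np.sqrt(pix_area_sr)) * 60.0
sens_P = 2.6                         # polarization sensitivity [muK*arcmin]
sigma_P = sens_P / pix_side_arcmin   # per-pixel polarization noise std [muK]
sigma_T = sigma_P / np.sqrt(2.0)     # temperature channel: lower by sqrt(2)
noise_T = np.random.normal(0.0, sigma_T, npix)
```

For nside = 512 the pixel side is about 6.9 arcmin, so 2.6 $\mu$K arcmin translates into roughly 0.38 $\mu$K per pixel in polarization.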
As mentioned in the introduction, a surprisingly small quadrupole moment was already found in the COBE data. Later missions (WMAP and Planck) confirmed this detection and extended it towards larger values of $\ell$; it is usually known as the "lack of power at large scales" anomaly. In the multipole range $\ell \sim (0-30)$ the observations are systematically below the model (see Figure 3). There are several premises that could explain the origin of this anomaly. The most interesting one would be that it has a primordial origin, which could imply new early-Universe physics. Another explanation would be that it is a simple statistical fluctuation. We are going to make use of the method developed in the previous sections in order to try to elucidate the origin of this anomaly. For that purpose, we perform a set of simulations including the lack-of-power anomaly in temperature maps as measured by Planck. In addition, we analyse different scenarios for the $E$-mode where we introduce an anomalous lack of power in its angular power spectrum. With these full-sky realizations, considered from now on as our observations, we obtain the correlated and uncorrelated maps by applying the theoretical filter.
Since $\Lambda$CDM only provides a model of the CMB angular power spectra, we need to quantify the goodness of fit of our observations with respect to $\Lambda$CDM. We are going to test how probable it is to obtain our observations assuming $\Lambda$CDM by computing their p-value, where a value close to 0 or 1 indicates an anomalous result.
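As a minimal sketch of this test (with a toy Gaussian distribution standing in for the $\Lambda$CDM variance distribution, not the document's actual simulations), the empirical p-value is simply the fraction of simulations below the observed value:

```python
import numpy as np

def empirical_pvalue(sims, observed):
    """Fraction of simulations below the observed value (one-tailed).

    Values near 0 (observation below almost all simulations) or near 1
    (above almost all of them) flag an anomalous observation under the model.
    """
    sims = np.asarray(sims)
    return np.count_nonzero(sims < observed) / len(sims)

rng = np.random.default_rng(0)
sims = rng.normal(1.0, 0.1, 1000)    # toy stand-in for the LambdaCDM variances
p_low = empirical_pvalue(sims, 0.5)  # far below the distribution: p close to 0
p_mid = empirical_pvalue(sims, 1.0)  # typical value: p close to 0.5
```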
Our purpose is to characterize the significance of the lack-of-power anomaly, which describes the systematic reduction of the power in the multipole range $\ell \sim (0-30)$. We compare our simulated observation with a set of 1000 simulations generated using the $\Lambda$CDM model. The observation is simulated with \verb+healpy+, for which it is necessary to express the $a_{\ell m}$ coefficients of the maps in terms of the power spectra. As we have Planck data available for the temperature angular power spectrum and the $T$-$E$ cross-angular power spectrum, we use them to include the anomaly in our simulation, mimicking the observed behaviour. We rename the measured Planck angular power spectra as $C_{\ell}^{\text{Planck}} \equiv \hat{C}_{\ell}$ and the ones predicted by the $\Lambda$CDM model as $C_{\ell}^{\text{CAMB}} \equiv C_{\ell}$. With this we have the following spherical harmonic coefficients:
$\begin{equation} t_{\ell m} = \eta_1 \sqrt{\hat{C}_{\ell}^{TT}}, \quad e_{\ell m} = \eta_1 \frac{\hat{C}_{\ell}^{TE}}{\sqrt{\hat{C}_{\ell}^{TT}}} + \eta_2 D_{\ell}, \quad b_{\ell m} = \eta_3 \sqrt{C_{\ell}^{BB}}, \tag{4.1} \end{equation}$
where
$\begin{equation} \eta_1, \eta_2, \eta_3 \in \mathbb{C} : \left\{ \begin{array}{rcl} \sqrt{2} \text{Re}(\eta_{j}) \sim \mathcal{N}(0,1) \\ \sqrt{2}\text{Im}(\eta_{j}) \sim \mathcal{N}(0,1) \end{array} \right. . \tag{4.2} \end{equation}$
and $\hat{C}_{\ell}^{TE}/\sqrt{\hat{C}_{\ell}^{TT}}$ and $D_{\ell}$ are the parts of the $E$-mode correlated and uncorrelated with the temperature, respectively. To be consistent with the $\Lambda$CDM model we need to compute the $E$-mode polarization spectrum defining $D_{\ell}$ as:
$\begin{equation} D_{\ell} = \sqrt{C_{\ell}^{EE} - \frac{(C_{\ell}^{TE})^2}{C_{\ell}^{TT}}}\ . \tag{4.3} \end{equation}$
eta_1 = hp.sphtfunc.almxfl(hp.sphtfunc.synalm(Cls_CAMB[0,:lmax+1], lmax=lmax, mmax=None, new=True, verbose=True), Cls_CAMB[0,:lmax+1]**(-1/2))
eta_1[np.isnan(eta_1)] = 0
eta_2 = hp.sphtfunc.almxfl(hp.sphtfunc.synalm(Cls_CAMB[1,:lmax+1], lmax=lmax, mmax=None, new=True, verbose=True), Cls_CAMB[1,:lmax+1]**(-1/2))
eta_2[np.isnan(eta_2)] = 0
# To reproduce the same result we save a pair of etas (loading them overwrites the ones generated above)
eta_1 = np.load(os.path.join(path,"Low_variance/eta_1.npy"))
eta_2 = np.load(os.path.join(path,"Low_variance/eta_2.npy"))
tlm = hp.sphtfunc.almxfl(eta_1, (Cls_Planck_TT[1,:lmax+1])**(1/2))
tlm[np.isnan(tlm)] = 0
elm_cT = hp.sphtfunc.almxfl(eta_1, Cls_Planck_TE[1,:lmax+1]*(Cls_Planck_TT[1,:lmax+1])**(-1/2))
elm_cT[np.isnan(elm_cT)] = 0
Dl = np.sqrt( Cls_CAMB[1,:lmax+1] - (Cls_CAMB[3,:lmax+1]**2/Cls_CAMB[0,:lmax+1]) )
blm = hp.synalm(Cls_CAMB[2,:lmax+1], lmax=lmax)
These $a_{\ell m}$ and $\eta_j$ coefficients satisfy:
$\begin{align} t_{\ell m}^* = (-1)^m t_{\ell m},\ e_{\ell m}^* = (-1)^m e_{\ell m},\ b_{\ell m}^* = (-1)^m b_{\ell m}, \tag{4.4} \\ \langle{\eta_j \eta_j^*}\rangle = \langle{\left(\eta_j^r + i \eta_j^i\right)\left(\eta_j^r - i \eta_j^i\right)}\rangle = \langle{\eta_j^r \eta_j^r}\rangle + \langle{\eta_j^i \eta_j^i}\rangle = \frac{1}{2} + \frac{1}{2} = 1, \tag{4.5} \end{align}$
and so the power spectra are obtained as expected:
$\begin{align} \langle{t_{\ell m} t_{\ell m}^*}\rangle = \hat{C}_{\ell}^{TT}, \tag{4.6} \\ \langle{e_{\ell m} e_{\ell m}^*}\rangle = \frac{(\hat{C}_{\ell}^{TE})^2}{\hat{C}_{\ell}^{TT}} + D_{\ell}^2 = \tilde{C}_{\ell}^{EE}, \tag{4.7} \\ \langle{b_{\ell m} b_{\ell m}^*}\rangle = C_{\ell}^{BB}.\tag{4.8} \end{align}$
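These expectation values can be checked numerically. The following sketch draws unit-variance complex $\eta_j$ as in Eq. (4.2) and verifies Eqs. (4.6)-(4.7) for a single multipole; the spectrum values are illustrative toy numbers, not the Planck or CAMB spectra:

```python
import numpy as np

# Monte Carlo check: with eta_j complex and unit-variance (Re and Im each
# N(0, 1/2)), <t t*> = C_TT_hat and <e e*> = (C_TE_hat)^2 / C_TT_hat + D^2.
rng = np.random.default_rng(1)
C_TT, C_TE, D = 2.0, 0.6, 0.9       # toy values for a single multipole
n = 200_000
eta1 = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
eta2 = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
t = eta1 * np.sqrt(C_TT)
e = eta1 * C_TE / np.sqrt(C_TT) + eta2 * D
var_t = np.mean(np.abs(t) ** 2)     # expect approx C_TT = 2.0
var_e = np.mean(np.abs(e) ** 2)     # expect approx C_TE^2/C_TT + D^2 = 0.99
```

The cross term between the correlated and uncorrelated parts averages to zero because $\eta_1$ and $\eta_2$ are independent.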
With these definitions we modify the $E$-mode polarization spectrum with a factor that decreases its power at low multipoles, controlled by a parameter $\alpha \in \left[0.1, 1\right]$.
This leads to a redefinition of the uncorrelated part $D_{\ell}$ of the $E$-mode power spectrum for each value of this parameter as:
$\begin{equation} D^2_{\ell}(\alpha) = \left\{ \begin{array}{lll} \alpha D_{\ell}^2 \quad , \quad \ell \leq 30 \\ D_{\ell}^2 \quad , \quad \ell > 30\\ \end{array} \right. \label{eq:Dl} \tag{4.9} \end{equation}$
where $\alpha=1$ means that the $E$-mode angular power spectrum remains compatible with $\Lambda$CDM, while values of $\alpha$ smaller than unity induce a lack of power in the $E$-mode angular power spectrum.
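A minimal sketch of Eq. (4.9) as a standalone function, assuming the array is indexed by multipole starting at $\ell = 0$, so $\ell \leq 30$ corresponds to the first 31 entries:

```python
import numpy as np

def suppress_Dl(Dl, alpha, ell_cut=30):
    """Apply Eq. (4.9): scale the uncorrelated E-mode part by sqrt(alpha)
    (i.e. D^2 -> alpha * D^2) for ell <= ell_cut, leaving higher
    multipoles untouched.  Dl is assumed indexed from ell = 0.
    """
    Dl_mod = np.asarray(Dl, dtype=float).copy()
    Dl_mod[: ell_cut + 1] *= np.sqrt(alpha)
    return Dl_mod

Dl = np.ones(100)
out = suppress_Dl(Dl, 0.25)  # low multipoles scaled by sqrt(0.25) = 0.5
```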
Once the observed $TQU$ maps are simulated, we can compute the $E$-mode maps correlated and uncorrelated with temperature, and the temperature maps correlated and uncorrelated with the $E$-mode. This leads to six maps ($T$, $E$, $EcT$, $TcE$, $EncT$ and $TncE$) that we need to analyse. We compute the variance of these maps and compare it with the variance distributions given by the set of 1000 simulations. We represent the histograms obtained for these maps together with the $\alpha = 1$ and $\alpha = 0.1$ observations.
alpha = np.linspace(0.1,1,20)
var_low_var = np.zeros((len(alpha),6),float)
for i in np.arange(len(alpha)):
Dl_mod = np.copy(Dl)
Dl_mod[:31] = np.sqrt(alpha[i])*Dl[:31] # suppress multipoles ell <= 30, cf. Eq. (4.9)
elm_ncT = hp.sphtfunc.almxfl(eta_2, Dl_mod)
elm_ncT[np.isnan(elm_ncT)] = 0
elm = elm_cT + elm_ncT
elm[np.isnan(elm)] = 0
# Maps with different anomaly level
maps_low_var = hp.sphtfunc.alm2map([tlm,elm,blm], nside, lmax=lmax, pixwin=True, fwhm=fwhm, pol=True)
total_TQU_low_var = maps_low_var + noise_maps
alms_total_low_var = hp.sphtfunc.map2alm(total_TQU_low_var, lmax=lmax, pol=True)
total_E_low_var = hp.sphtfunc.alm2map(alms_total_low_var[1], nside, lmax=lmax, pixwin=False, fwhm=0.0,pol=True)
alm_low_var_ET_th = hp.sphtfunc.smoothalm(alms_total_low_var[0], beam_window=np.insert(WT_th_noise,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_low_var_TE_th = hp.sphtfunc.smoothalm(alms_total_low_var[1], beam_window=np.insert(WE_th_noise,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
# Correlated and uncorrelated maps
map_low_var_EcT_th = hp.sphtfunc.alm2map(alm_low_var_ET_th, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
map_low_var_TcE_th = hp.sphtfunc.alm2map(alm_low_var_TE_th, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
map_low_var_EncT_th = total_E_low_var - map_low_var_EcT_th
map_low_var_TncE_th = total_TQU_low_var[0] - map_low_var_TcE_th
var_low_var[i,0] = (np.std(total_TQU_low_var[0]))**2
var_low_var[i,1] = (np.std(total_E_low_var))**2
var_low_var[i,2] = (np.std(map_low_var_EcT_th))**2
var_low_var[i,3] = (np.std(map_low_var_TcE_th))**2
var_low_var[i,4] = (np.std(map_low_var_EncT_th))**2
var_low_var[i,5] = (np.std(map_low_var_TncE_th))**2
print("%.2f %% completed"%(100*(i+1)/len(alpha)))
print('******************************************************')
# np.save(os.path.join(path,"Low_variance/var_low_var.npy"), var_low_var)
var_low_var = np.load(os.path.join(path,"Low_variance/var_low_var.npy"))
##########################################################################
### 1000 simulations to have a LambdaCDM distribution of the variances ###
##########################################################################
var_ideal_1000 = np.zeros((1000,6),float)
## LiteBIRD
# 2.6 muK arcmin -> per-pixel sensitivity at nside = 512 (muK)
sigma_T = (2.6/np.sqrt(2)) / (Anside*(180*60/np.pi))
sigma_P = 2.6 / (Anside*(180*60/np.pi))
# for i in np.arange(1000):
alm_cmb = hp.sphtfunc.synalm(Cls_CAMB, lmax=lmax, mmax=None, new=True, verbose=True) #TEB
maps_TQU = hp.sphtfunc.alm2map(alm_cmb, nside, lmax=None, mmax=None, pixwin=True, fwhm=fwhm, sigma=None, pol=True, inplace=False, verbose=True) #TQU
noise_map_T = np.random.normal(0,sigma_T,npix)
noise_map_Q = np.random.normal(0,sigma_P,npix)
noise_map_U = np.random.normal(0,sigma_P,npix)
noise_maps = np.array([noise_map_T, noise_map_Q, noise_map_U], np.float64)
total_maps = maps_TQU + noise_maps #TQU ##pixwin+fwhm##
alm_total = hp.sphtfunc.map2alm(total_maps, lmax=lmax, mmax=None, pol=True, verbose=True) #TEB ##pixwin+fwhm##
total_map_E = hp.sphtfunc.alm2map(alm_total[1], nside, lmax=None, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ##pixwin+fwhm##
# (1) Correlated and uncorrelated maps:
# Data with the anomaly, filtered with the theoretical filter
alm_T = hp.sphtfunc.map2alm(total_maps[0], lmax=lmax, mmax=None, pol=False, verbose=True)
alm_E = hp.sphtfunc.map2alm(total_map_E, lmax=lmax, mmax=None, pol=False, verbose=True)
alm_total_ET_th = hp.sphtfunc.smoothalm(alm_T, beam_window=np.insert(WT_th_noise,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
alm_total_TE_th = hp.sphtfunc.smoothalm(alm_E, beam_window=np.insert(WE_th_noise,[0,0],0), pol=False, mmax=None, verbose=True, inplace=True)
# Correlated and uncorrelated maps
map_total_EcT_th = hp.sphtfunc.alm2map(alm_total_ET_th, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
map_total_TcE_th = hp.sphtfunc.alm2map(alm_total_TE_th, nside, lmax=lmax, mmax=None, pixwin=False, fwhm=0.0, sigma=None, pol=False, inplace=False, verbose=True) ## pixwin+fwhm in the alm ##
map_total_EncT_th = total_map_E - map_total_EcT_th
map_total_TncE_th = total_maps[0] - map_total_TcE_th
# (2) Variance over the 1000 simulations:
var_ideal_1000[i,0] = np.std(total_maps[0])**2
var_ideal_1000[i,1] = np.std(total_map_E)**2
var_ideal_1000[i,2] = np.std(map_total_EcT_th)**2
var_ideal_1000[i,3] = np.std(map_total_TcE_th)**2
var_ideal_1000[i,4] = np.std(map_total_EncT_th)**2
var_ideal_1000[i,5] = np.std(map_total_TncE_th)**2
print("%.2f %% complete"%(100*(i+1)/1000))
print('******************************************************')
# np.save(os.path.join(path,"Low_variance/var_ideal_1000.npy"), var_ideal_1000)
var_ideal_1000 = np.load(os.path.join(path,"Low_variance/var_ideal_1000.npy"))
As the dependence on $\alpha$ enters only through the $C_{\ell}^{EE}$ spectrum, the variances of the $T$ and $EcT$ maps are independent of this parameter. We observe that, after applying the methodology developed in this work to obtain the correlated and uncorrelated maps, the p-values of $TcE$ and $EncT$ show that the significance of the anomaly detection improves with respect to the original $T$ and $E$ maps. These two maps have angular power spectra proportional to $C_{\ell}^{EE}$, which highlights the importance of future missions such as LiteBIRD, which will provide an $E$ map with uncertainties at the cosmic-variance limit.
def pvalue(array, observ):
    # Empirical p-value: fraction of simulated values below the observation
    sorted_array = np.sort(array, axis=0)
    area_inf = np.count_nonzero(sorted_array < observ, axis=0)
    return area_inf/len(array)
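As a sanity check, the estimator above can be exercised on a toy sample (its definition is repeated here, purely so the snippet runs standalone; the sample values are hypothetical):

```python
import numpy as np

def pvalue(array, observ):
    # Fraction of simulated values that fall below the observed one
    sorted_array = np.sort(array, axis=0)
    area_inf = np.count_nonzero(sorted_array < observ, axis=0)
    return area_inf / len(array)

# Toy check: for the values 0..99, exactly half lie below 50
print(pvalue(np.arange(100), 50))  # -> 0.5
```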
col_names=['T','E','EcT','TcE','EncT','TncE']
pvalue_k0 = np.empty((6))
fig, axes = plt.subplots(1, 6, figsize=(40,6))
for i in np.arange(6):
    axes[i].hist(np.array(var_ideal_1000[:,i]), color='teal', alpha=0.6, density=True)
    sns.kdeplot(np.array(var_ideal_1000[:,i]), ax=axes[i])
    kde_curve = axes[i].lines[0]
    x = kde_curve.get_xdata()
    y = kde_curve.get_ydata()
    y_max = y.max()
    var_max = x[y.argmax()]
    axes[i].axvline(x=var_low_var[0,i], label=r'$\alpha = 0.1$')
    axes[i].axvline(x=var_low_var[-1,i], color='red', linestyle='dashed', label=r'$\alpha = 1$')
    axes[i].legend(loc='upper right')
    axes[i].set_title(col_names[i])
# p-value of the observed variance for each alpha and each map type
pvalue_k = np.empty((len(alpha),6))
for i in np.arange(len(alpha)):
    for j in np.arange(6):
        pvalue_k[i,j] = pvalue(var_ideal_1000[:,j], var_low_var[i,j])
fig, ax = plt.subplots(1, 1, figsize=(10,7))
for i in np.arange(6):
    plt.scatter(alpha, pvalue_k[:,i], label=col_names[i])
    plt.plot(alpha, pvalue_k[:,i])
plt.legend(loc='upper right')
plt.xlabel(r'$\alpha$')
plt.ylabel(r'p-value')
ax.invert_xaxis()
We can see how the p-value changes with the level of anomaly included in the $E$-mode. Recalling that $\alpha=1$ corresponds to the situation where the anomaly is present only in the temperature maps, as measured by Planck, we see that as the lack of power is included in the $E$-mode maps the variances of the maps change. Focusing on the $TcE$ map, for $\alpha=1$ its variance is anomalous, but larger than expected from $\Lambda$CDM; as the anomaly is introduced in the $E$-mode, this variance decreases and becomes more compatible with the standard cosmological prediction ($\alpha \sim 0.5$). From that point the variance continues to decrease, becoming anomalous again with respect to the theoretical prediction.
During this work we have developed a methodology based on Wiener filtering to obtain the correlated and uncorrelated maps of both temperature and $E$-mode polarization. Depending on the level of contaminants present in the simulations, we have found different techniques to compute the optimal Wiener filters. We have also analysed the difference between defining the filter from simulations, where a cosmological model is assumed, and defining it from observations.
For the simulation approach, we have seen that there are different possibilities for the filter definition when a mask is included, and that their effects are relevant in the filter construction. To determine which filter recovers the signal optimally, we computed the relative dispersions, with respect to the ideal maps, of the difference residual maps. This leads us to conclude that there is no noticeable difference between computing the average before or after the $\verb+anafast+$ filters are defined, but for the $\verb+NaMaster+$ filters there is a clear disadvantage when the mean angular power spectrum is calculated before the filter definition. Overall, the best filter is the SAMF: it is computed from simulations (S), the angular power spectra are obtained with $\verb+anafast+$ (A), the mean of the 100 $C_{\ell}$ is then taken (M), and finally the filter is defined (F).
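The SAMF recipe can be sketched as follows. Here `cls_sims` is a hypothetical array of shape `(nsims, 3, lmax+1)` holding the anafast TT, EE and TE spectra of each simulation, and the Wiener forms $W^T_\ell = C_\ell^{TE}/C_\ell^{TT}$ (filtering $T$ into $EcT$) and $W^E_\ell = C_\ell^{TE}/C_\ell^{EE}$ (filtering $E$ into $TcE$) are assumed for illustration; this is a minimal sketch, not the exact code used in the work:

```python
import numpy as np

def samf_filter(cls_sims):
    # (M) average the simulated spectra first; shape (3, lmax+1)
    cl_tt, cl_ee, cl_te = cls_sims.mean(axis=0)
    # (F) define the Wiener-type filters, guarding empty multipoles
    with np.errstate(divide='ignore', invalid='ignore'):
        w_t = np.where(cl_tt > 0, cl_te / cl_tt, 0.0)  # T -> EcT
        w_e = np.where(cl_ee > 0, cl_te / cl_ee, 0.0)  # E -> TcE
    return w_t, w_e
```

The resulting arrays play the role of `WT_th_noise` and `WE_th_noise` above, to be passed to `hp.sphtfunc.smoothalm` via `beam_window`.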
For the data case, we have found a method for smoothing the angular power spectra of a single realization, which is taken as our observation. As in the simulation case, there are four different possibilities for the filter definition. Comparing the results through the relative dispersions of the difference residual maps, we found that the best filter is the DAFS: it is computed from a single realization, i.e. the data (D), the angular power spectra are obtained with $\verb+anafast+$ (A), the filter is then defined (F) and finally smoothed (S).
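Schematically, the DAFS ordering (filter first, smooth afterwards) can be illustrated as below; the flat running mean and the window size stand in for the actual smoothing used in the work and are illustrative assumptions only:

```python
import numpy as np

def dafs_filter(cl_tt_obs, cl_te_obs, window=11):
    # (D, A) cl_tt_obs, cl_te_obs: anafast spectra of the single observation
    # (F) define the raw filter from the noisy one-realization spectra
    with np.errstate(divide='ignore', invalid='ignore'):
        w_raw = np.where(cl_tt_obs > 0, cl_te_obs / cl_tt_obs, 0.0)
    # (S) smooth the filter in ell with a running mean of length `window`
    kernel = np.ones(window) / window
    return np.convolve(w_raw, kernel, mode='same')
```

Smoothing after the division damps the scatter that a single realization imprints on the ratio of spectra, which is the motivation for placing the S step last.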
We have also included the foreground residuals in the analysis, bearing in mind that we do not have a model describing their emission. As mentioned, we obtained foreground residual contributions according to [Errad, 2016] and [Diego-Palazuelos, 2020], which are negligible in comparison with the noise level. This allows us to analyse these CMB maps with the SAMF filter, which did not consider any foreground contribution. Moreover, we obtained a filter as in the data case (DAFS_foreg) to assess the real influence of these foreground residuals on the filter definition. As expected, we found better results for the simulation approach when the foreground residual level is negligible. To delve further into the importance of drawing conclusions from data, we increased the foreground residual contribution by factors of $\{10, 50, 100\}$ to see whether there is a difference between the filter from simulations and the one derived from the data. Having a method to calculate a filter from the data proves remarkable for the temperature correlated and uncorrelated maps: when the residual foreground contribution is not negligible, we can no longer use the filter obtained from simulations, as we lack a reliable model of the residual foreground angular power spectrum. This degradation is mitigated with the data strategy. For the $E$-mode correlated and uncorrelated maps, both procedures return similar results.
Finally, we have applied the developed methodology to simulated maps with a lack of power at low multipoles, analysing whether more statistically significant conclusions can be drawn from the correlated and uncorrelated maps than from the raw ones. We generated 1000 simulations to obtain the $\Lambda$CDM distributions of the $T$, $E$, $TcE$, $TncE$, $EcT$ and $EncT$ variances, and used them to test whether our observation is compatible with $\Lambda$CDM, computing the p-value to determine whether the anomaly would be detected with higher significance in the correlated and uncorrelated maps. In these simulated observations we included the observed lack-of-power anomaly only in temperature, as measured by Planck, and studied different $E$-mode cases where the anomaly was introduced by decreasing the power in the low multipole range with an amplitude parameter $\alpha$. We have seen that a more anomalous result is obtained for $TcE$ and $EncT$ in comparison with the raw $T$ and $E$ maps, so this methodology could help us detect the lack-of-power anomaly with higher significance in future missions.
[Alsonso, 2019] D. Alonso, J. Sanchez, and A. Slosar, A unified pseudo-Cl framework, Monthly Notices of the Royal Astronomical Society, vol. 484, pp. 4127–4151, (2019).
[Bennet, 2003] C. L. Bennett, M. Halpern, G. Hinshaw, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, L. Page, D. N. Spergel, G. S. Tucker, and et al., First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results, The Astrophysical Journal Supplement Series, no. 1, (2003).
[Challinor, 2012] A. Challinor, CMB anisotropy science: a review, Proceedings of the International Astronomical Union, (2012).
[Delabrouille, 2013] J. Delabrouille, M. Betoule, J.-B. Melin, M.-A. Miville-Deschênes, J. Gonzalez-Nuevo, M. Le Jeune, G. Castex, G. de Zotti, S. Basak, M. Ashdown, and et al., The pre-launch Planck Sky Model: a model of sky emission at submillimetre to centimetre wavelengths, Astronomy & Astrophysics, (2013).
[Dicke, 1965] R. H. Dicke et al., Cosmic Black-Body Radiation, The Astrophysical Journal, (1965).
[Diego-Palazuelos, 2020] P. Diego-Palazuelos, P. Vielva, E. Martínez-González, and R. Barreiro, Comparison of de-lensing methodologies and assessment of the delensing capabilities of future experiments, Journal of Cosmology and Astroparticle Physics, (2020).
[Dodelson, 2003] S. Dodelson, Modern Cosmology. Amsterdam: Academic Press, (2003).
[Errad, 2016] J. Errard, S. M. Feeney, H. V. Peiris, and A. H. Jaffe, Robust forecasts on fundamental physics from the foreground-obscured, gravitationally-lensed CMB polarization, Journal of Cosmology and Astroparticle Physics, (2016).
[Errad, 2019] J. Errard and R. Stompor, Characterizing bias on large scale CMB B-modes after Galactic foregrounds cleaning, Physical Review D, Feb (2019).
[Fixsen, 2009] D. J. Fixsen, The temperature of the Cosmic Microwave Background, The Astrophysical Journal, (2009).
[Fromert and Ensslin, 2009] M. Frommert and T. A. Ensslin, Ironing out primordial temperature fluctuations with polarisation: optimal detection of cosmic structure imprints, (2009).
[Gorski, 2005] K. M. Gorski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelmann, HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere, The Astrophysical Journal, no. 2, (2005).
[Hamuzi, 2019] M. Hazumi et al., LiteBIRD: A Satellite for the Studies of B-Mode Polarization and Inflation from Cosmic Background Radiation Detection, Journal of Low Temperature Physics, vol. 194, no. 5-6, pp. 443–452, (2019).
[Hazumi, 2020] M. Hazumi et al., LiteBIRD satellite: JAXA's new strategic L-class mission for all-sky surveys of cosmic microwave background polarization, Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave, (2020).
[Ichiki, 2014] Kiyotomo Ichiki, CMB foreground: A concise review, Progress of Theoretical and Experimental Physics, (2014).
[Jeong, 2020] D. Jeong and M. Kamionkowski, Gravitational Waves, CMB Polarization, and the Hubble Tension, Physical Review Letters, (2020).
[Kamionkowski, 1997] M. Kamionkowski, A. Kosowsky, and A. Stebbins, Statistics of cosmic microwave background polarization, Physical Review D, vol. 55, no. 12, (1997).
[Kamionkowski, 2016] M. Kamionkowski and E. D. Kovetz, The Quest for B Modes from Inflationary Gravitational Waves, Annual Review of Astronomy and Astrophysics, (2016).
[Lewis] A. Lewis, CAMB Notes.
[Lewis, 2011] A. Lewis and A. Challinor, CAMB: Code for Anisotropies in the Microwave Background, Astrophysics Source Code Library, (2011).
[Penzias, 1965] A. Penzias and R. Wilson, A measurement of excess antenna temperature at 4080Mc/s., The Astrophysical Journal, vol. 142, pp. 419–421, (1965).
[PlanckI, 2014] Planck Collaboration, Planck 2013 results. I. Overview of products and scientific results, Astronomy & Astrophysics, (2014).
[PlanckXXIII, 2014] Planck Collaboration, Planck 2013 results. XXIII. Isotropy and statistics of the CMB, Astronomy & Astrophysics, (2014).
[PlanckX, 2016] Planck Collaboration, Planck 2015 results. X. Diffuse component separation: Foreground maps, Astronomy & Astrophysics, (2016).
[PlanckXXI, 2016] Planck Collaboration, Planck 2015 results. XXI. The integrated Sachs-Wolfe effect, Astronomy & Astrophysics, (2016).
[PlanckVI, 2018] Planck Collaboration, Planck 2018 results. VI. Cosmological parameters, Astronomy & Astrophysics, (2018).
[PlanckVII, 2018] Planck Collaboration, Planck 2018 results. VII. Isotropy and Statistics of the CMB, Astronomy & Astrophysics, (2018).
[PlanckVIII, 2018] Planck Collaboration, Planck 2018 results. VIII. Gravitational lensing, (2018).
[Planck, 2020] Planck Collaboration, Planck intermediate results, Astronomy & Astrophysics, (2020).
[PlanckVII, 2020] Planck Collaboration, Planck 2018 results. VII. Isotropy and Statistics of the CMB, Astronomy & Astrophysics, (2020).
[Samtleben, 2007] D. Samtleben, S. Staggs, and B. Winstein, The Cosmic Microwave Background for Pedestrians: A Review for Particle and Nuclear Physicists, Annual Review of Nuclear and Particle Science, (2007).
[Scott, 1994] D. Scott, M. Srednicki, and M. White, "Sample variance" in small-scale cosmic microwave background anisotropy experiments, The Astrophysical Journal, vol. 421, (1994).
[Smoot, 1992] George F. Smoot et al., Structure in the COBE differential microwave radiometer first year maps, The Astrophysical Journal Letters, (1992).
[Thorne, 2017] Ben Thorne, Joanna Dunkley, David Alonso, and Sigurd Naess, The Python Sky Model:software for simulating the Galactic microwave sky, Monthly Notices of the Royal Astronomical Society, (2017).
[Tristram, 2017] M. Tristram, A. J. Banday, K. M. Górski, R. Keskitalo, C. R. Lawrence, K. J. Andersen, R. B. Barreiro, J. Borrill, H. K. Eriksen, R. Fernandez-Cobos, and et al., Planck constraints on the tensor-to-scalar ratio, Astronomy & Astrophysics, (2021).
[Wright, 1994] E. L. Wright, G. F. Smoot, C. L. Bennett, and P. M. Lubin, Angular Power Spectrum of the Microwave Background Anisotropy seen by the COBE Differential Microwave Radiometer, The Astrophysical Journal, (1994).
[Zonca, 2019] Andrea Zonca, Leo Singer, Daniel Lenz, Martin Reinecke, Cyrille Rosset, Eric Hivon, and Krzysztof Gorski, healpy: equal area pixelization and spherical harmonics transforms for data on the sphere in Python, Journal of Open Source Software, (2019).